
Conversation

@TD-er (Collaborator) commented Jul 18, 2025

As discussed here: #29 (comment)

Not yet tested, as I really need to get some sleep now...

My biggest concern right now is whether writeNextMsgId might also need to call flushBuffer() first.
But as I said, I can't think clearly right now, so maybe you can have a look, @hmueller01.

@hmueller01 (Owner) commented:

I think this can't be implemented like this.

size_t PubSubClient::write(const uint8_t* buffer, size_t size) {
    const size_t rc = appendBuffer(buffer, size);
    if (rc != 0) {
        lastOutActivity = millis();
    }
    return rc;
}

We can't set lastOutActivity here, as appendBuffer() does not necessarily write to the broker. And we need this to ping the broker in time ...

if (keepAliveMillis && ((t - lastInActivity > this->keepAliveMillis) || (t - lastOutActivity > this->keepAliveMillis))) {

But setting lastOutActivity only in flushBuffer() also won't work in all cases (e.g. if the application takes too long to fill in all the needed data), because here

this->buffer[0] = MQTTPINGREQ;

we will overwrite the buffer to send the MQTTPINGREQ.
Maybe we can just flushBuffer() here, check the rc, and only do the MQTTPINGREQ if it's 0.
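
For illustration, a sketch of that idea (assuming, as suggested above, that flushBuffer() returns the number of bytes it had to flush, so 0 means the append buffer was already empty and can safely be reused):

// Sketch only, not the PR's actual code: flush any buffered data before
// reusing this->buffer for the PINGREQ.
if (keepAliveMillis && ((t - lastInActivity > this->keepAliveMillis) || (t - lastOutActivity > this->keepAliveMillis))) {
    if (flushBuffer() == 0) {
        // Nothing was pending, so the shared buffer may be overwritten.
        this->buffer[0] = MQTTPINGREQ;
        this->buffer[1] = 0;
        _client->write(this->buffer, 2);
        lastOutActivity = t;
    }
    // If flushBuffer() did send pending data, that already counts as outgoing
    // activity, so the PINGREQ can wait for the next loop() pass.
}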

Regarding your writeNextMsgId() concern: I don't think there is a problem. writeNextMsgId() is used in endPublish(), where you did the flushBuffer() before, and in subscribe() / unsubscribe() - don't care ...

@TD-er (Collaborator, Author) commented Jul 19, 2025

Well, since appendBuffer may call flushBuffer, which does the actual communication (and thus should set lastOutActivity), I agree that appendBuffer itself, and thus also these write calls, should not set lastOutActivity.

@TD-er (Collaborator, Author) commented Jul 19, 2025

I was thinking... maybe all _client->write calls should go through this appendBuffer, just to make sure there isn't any risk of mixing up different calls.

@hmueller01 (Owner) commented:

This was my first thought too. But then more things need to be rewritten ...

@hmueller01 (Owner) commented:

And yes, at least here

rc += _client->write((uint8_t)pgm_read_byte_near(payload + i));

we should use it as well to solve #36 ...
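
For illustration, a sketch of what that could look like, routing the PROGMEM payload through the append buffer instead of byte-wise _client->write() calls (the loop variables and the single-byte appendBuffer() overload are assumed from the discussion above, not copied from the PR):

// Sketch only: buffer each PROGMEM byte; appendBuffer() flushes to the
// client in larger chunks once the internal buffer is full.
for (size_t i = 0; i < plength; i++) {
    rc += appendBuffer((uint8_t)pgm_read_byte_near(payload + i));
}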

@hmueller01 (Owner) commented:

@TD-er I did some of the implementations mentioned above. Looks quite promising. If you have time, you may have a look over it.

@hmueller01 (Owner) commented:

@TD-er We could merge appendBuffer(uint8_t data)

size_t PubSubClient::appendBuffer(uint8_t data) {

into

size_t PubSubClient::write(uint8_t data) {
    return appendBuffer(data);
}

and remove appendBuffer(uint8_t data). What do you think? Might be less clear ...

@hmueller01 (Owner) commented:

This PR fixes #29.

@hmueller01 (Owner) commented:

This fixes #36

@hmueller01 (Owner) commented:

@TD-er If you could take a look over it, I'd like to merge this. I have tested it for a few weeks on a device of mine (but it only uses publish_P()).

@hmueller01 (Owner) commented:

@TD-er Is there any chance to finish this PR from your side? I would love to work on some other open issues, but this might create extra merge work here ...

@TD-er (Collaborator, Author) commented Oct 16, 2025

I will have a look

*/
size_t PubSubClient::writeBuffer(size_t pos, size_t size) {
    size_t rc = 0;
    if (size > 0 && pos + size <= this->bufferSize) {
@TD-er (Collaborator, Author) commented Oct 16, 2025

I know I mentioned it before and it is very likely correct C++ syntax, but could you wrap each side of the && in parentheses?

if ((size > 0) && ((pos + size) <= this->bufferSize))

makes it easier for a human to read, also because the IDE color-matches the parentheses.

Same for line 573 and probably others too.

@hmueller01 (Owner) commented:

Yes. Can do this.

@TD-er (Collaborator, Author) commented Oct 16, 2025

OK, I loaded the files into my (ESPEasy) project and it builds just fine, so that's a plus ;)

Also normal MQTT operations do seem to work just fine.

Right now I don't have the setup here to dump large chunks, so that part is a bit hard to test.

Just as a cosmetic remark... There is no consistency in naming of member variables.
I would propose to start all members with an _

I don't see any obvious mistakes, so LGTM

@hmueller01 (Owner) commented:

> Just as a cosmetic remark... There is no consistency in naming of member variables.
> I would propose to start all members with an _

Yeah, that's what I want to do next ... see #60.

@hmueller01 (Owner) commented:

Hm, yes. Testing with big data is a good point. Maybe I can create a unit test for that ...

if (!readByte(&digit)) return 0;
if (this->stream) {
-    if (isPublish && idx - *hdrLen - 2 > skip) {
+    if (isPublish && (idx - *hdrLen - 2 > skip)) {
@TD-er (Collaborator, Author) commented:

Also wrap the sum before the compare in parentheses ...

@hmueller01 (Owner) commented:

Unit test runs green

  • publishes with long payload message (> buffer size) ✓

With some simple code I just sent 6 kB of payload data ...

/**
 * @brief Publish a file from LittleFS to MQTT topic using beginPublish/write/endPublish
 */
void mqttPublishFile(const char *lfs_path, const char *mqtt_topic) {
  if (!LittleFS.exists(lfs_path)) {
    ERRORF_P("%s: LittleFS file not found: %s" LF, __func__, lfs_path);
    return;
  }
  File f = LittleFS.open(lfs_path, "r");
  if (!f) {
    ERRORF_P("%s: Failed to open file: %s" LF, __func__, lfs_path);
    return;
  }

  size_t remaining = f.size();
  INFOF_P("%s: Publishing file %s (%u bytes) to topic %s" LF, __func__, lfs_path, (unsigned)remaining, mqtt_topic);

  if (!m_mqtt_client.beginPublish(mqtt_topic, remaining, false)) {
    ERRORF_P("%s: beginPublish failed for topic %s" LF, __func__, mqtt_topic);
    f.close();
    return;
  }

  const size_t BUF_SZ = 512;
  uint8_t buf[BUF_SZ];
  while (remaining > 0) {
    size_t toRead = (remaining > BUF_SZ) ? BUF_SZ : remaining;
    int r = f.read(buf, toRead);
    if (r <= 0) break;
    m_mqtt_client.write(buf, r);
    remaining -= r;
  }

  m_mqtt_client.endPublish();
  f.close();
}
    // publish LittleFS file "/index.html" to topic "test"
    mqttPublishFile("/index.html", "test");

mqttPublishFile: Publishing file /index.html (6625 bytes) to topic test
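
As a side note, since each write() call may now flush to the network internally, the send loop above could be hardened to abort on a short write; a hypothetical tweak (not part of the PR):

    // Hypothetical hardening of the send loop: stop if fewer bytes were
    // accepted than requested, e.g. because an internal flush failed.
    if (m_mqtt_client.write(buf, (size_t)r) != (size_t)r) {
      ERRORF_P("%s: MQTT write failed, aborting" LF, __func__);
      break;
    }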

So everything looks good to me as well. Merging that.

hmueller01 merged commit 806b3df into master Oct 17, 2025
1 check passed
TD-er deleted the feature/optimize_large_publish branch October 17, 2025 19:13