
HID: Avoid repeated error messages from hid_write()/hid_read() #13692

Merged — 3 commits merged into mixxxdj:2.4 on Nov 3, 2024

Conversation

daschuer
Member

This fixes the huge log file issue from #13660 by suppressing log spam in any disconnect case, without fixing the root cause in that particular case.

Member

@acolombier acolombier left a comment


LGTM! Thanks

Do we think this is a sufficient mitigation for the log file size till we have a proper limiter in place?

        << mixxx::convertWCStringToQString(
                hid_error(pHidDevice),
                kMaxHidErrorMessageSize);
m_hidWriteErrorLogged = true; // Don't log the same error again
Member

@JoergAtGithub JoergAtGithub Sep 25, 2024


This will not work as the comment describes. To ensure that only identical errors are inhibited, you need to compare the error message string.

        << mixxx::convertWCStringToQString(
                hid_error(m_pHidDevice),
                kMaxHidErrorMessageSize);
m_hidReadErrorLogged = true; // Don't log the same error again
Member


Same here

@daschuer
Member Author

I have adjusted the error message. We also want to stop log spam in the unlikely case that there are alternating different error messages, because those would still fill the log file.

@daschuer
Member Author

Do we think this is a sufficient mitigation for the log file size till we have a proper limiter in place?

I consider this a permanent solution, because we also want to avoid the performance impact until the 100 MB log limit from #13684 is reached.

Member

@JoergAtGithub JoergAtGithub left a comment


hid_write is also used here:

int result = hid_write(pHidDevice,
        reinterpret_cast<const unsigned char*>(reportToSend.constData()),
        reportToSend.size());
if (result == -1) {
    qCWarning(logOutput) << "Unable to send data to" << deviceInfo.formatName() << ":"
                         << mixxx::convertWCStringToQString(
                                    hid_error(pHidDevice),
                                    kMaxHidErrorMessageSize);
}

Comment on lines 105 to 109
qCWarning(m_logOutput) << "Unable to read buffered HID InputReports from"
<< m_deviceInfo.formatName() << ":"
<< mixxx::convertWCStringToQString(
hid_error(m_pHidDevice),
kMaxHidErrorMessageSize);
Member


Suggested change

- qCWarning(m_logOutput) << "Unable to read buffered HID InputReports from"
-         << m_deviceInfo.formatName() << ":"
-         << mixxx::convertWCStringToQString(
-                 hid_error(m_pHidDevice),
-                 kMaxHidErrorMessageSize);
+ qCWarning(m_logOutput) << "Unable to read buffered HID InputReports from"
+         << m_deviceInfo.formatName() << ":"
+         << mixxx::convertWCStringToQString(
+                 hid_error(m_pHidDevice),
+                 kMaxHidErrorMessageSize)
+         << "Note that this message is only logged once and may not appear again until all hid_read errors have disappeared.";

@daschuer
Member Author

daschuer commented Oct 6, 2024

Done

@JoergAtGithub JoergAtGithub added this to the 2.4.2 milestone Oct 21, 2024
@JoergAtGithub
Member

Works as expected. Thank you!

@JoergAtGithub JoergAtGithub merged commit cd59df2 into mixxxdj:2.4 Nov 3, 2024
14 checks passed