v1.5: Logging support #929
Replies: 4 comments
-
Just a note on "It would be ideal to use the Collector to do log scraping as well" — I believe that was the intention of the
-
I'm curious why the Collector's native log scraping isn't considered robust enough. My company uses it every day, with the paradigm that logs are written to stdout/stderr as appropriate and then collected from there. In fact, I'm doing exactly that with this demo, using a daemonset of additional collectors plus one more collector as a deployment. I forward from the embedded collector to that collector via OTLP and do a lot of parsing inside of OTel. This lets me collect everything that is available and make use of it when showing off our custom OTel collector and our custom management software. It also lets me correct a bunch of "bad juju" currently going on with the logs emitted by the collector services, such as info logs being sent to stderr. Examples (using our company's management platform "BindPlane" for OTel, parsed out to the body so they become a jsonPayload in Google Cloud Logging):
In fact, I'd argue the biggest problem here is that span/trace IDs are not present on most of the logs that should have them. Now, how are we gathering these logs? Simple: with a custom plugin wrapper around the filelog receiver. Since we wrote the filelog receiver, our company is very familiar with it. Here is my receiver configuration:
The plugin source can be found here: So, collecting logs can and does work. It just requires more collectors, since it needs one running in a pod on each node. Is it as elegant as we would like? Probably not. However, it works very well, and we have several customers using it.
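(The actual plugin and configuration referenced above aren't preserved in this thread. Purely as a hypothetical sketch of the general pattern being described, a node-level collector scraping Kubernetes pod logs with the filelog receiver and forwarding over OTLP might look something like the following; the file paths, exclude pattern, regex, and endpoint are all assumptions, not the author's real config.)

```yaml
receivers:
  filelog:
    include:
      - /var/log/pods/*/*/*.log
    exclude:
      # Avoid re-ingesting the collector's own output (pod name is a placeholder)
      - /var/log/pods/*otel-collector*/*/*.log
    start_at: beginning
    include_file_path: true
    operators:
      # CRI log lines look like "<RFC3339 time> <stdout|stderr> <P|F> <message>"
      - type: regex_parser
        regex: '^(?P<time>\S+) (?P<stream>stdout|stderr) (?P<logtag>\S+) ?(?P<message>.*)$'

exporters:
  otlp:
    endpoint: primary-collector:4317   # hypothetical gateway collector service
    tls:
      insecure: true

service:
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [otlp]
```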
-
I would argue that instead of involving Fluent Bit or Promtail, you could just include an OTel collector on every application service pod to scrape logs and forward them to the primary collector over the already established OTLP receiver. This is, in my mind, the most elegant solution, and it uses OTel everywhere it's appropriate. If you're just scraping a file, the filelog receiver works amazingly well.
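(For illustration, the receiving end of that pattern would be a gateway collector exposing an OTLP logs pipeline. The sketch below is only an assumption about how such a gateway might be wired; the debug exporter is a stand-in for whatever backend exporter the demo ultimately chooses.)

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317   # per-pod or daemonset collectors export here

processors:
  batch: {}

exporters:
  debug:                 # placeholder; swap for the real logging backend exporter
    verbosity: detailed

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]
```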
-
One final thought, sorry for the multiple posts:
-
As adoption of logging telemetry grows within OpenTelemetry, the demo team will add full support for this telemetry type, including a backend and a visualization mechanism for the data.
OpenSearch was recommended for the logging backend during the last SIG meeting (June 3rd, 2023). Though OpenSearch is not a CNCF project, it is well-adopted in the observability space and has a strong OSS community supporting it.
The recommendation is to use Fluent Bit to scrape the actual logs from Docker services and Kubernetes pods. The fluentforwardreceiver would then be used in the Collector to process and export the logging telemetry. It would be ideal to use the Collector to do log scraping as well, but as of today, support for scraping Docker and Kubernetes isn't robust enough to be used for the Demo. As those capabilities mature, we will revisit using the Collector for log scraping.
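(As a rough sketch of what that wiring could look like, the Collector side would amount to a logs pipeline fed by the fluentforward receiver, with Fluent Bit's forward output pointed at the receiver's endpoint. The port choice and the debug exporter placeholder below are assumptions, not the proposed demo configuration.)

```yaml
receivers:
  fluentforward:
    # Fluent Bit's forward output plugin would target this address
    endpoint: 0.0.0.0:8006

exporters:
  debug: {}   # stand-in; the real deployment would export to the chosen logging backend

service:
  pipelines:
    logs:
      receivers: [fluentforward]
      exporters: [debug]
```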