Spikes in requests/sec reporting under a high number of requests #4
From @jchauncey on June 6, 2017 21:44 Yeah, so we have to parse the nginx logs to determine the requests/sec. We parse the data in a custom fluentd plugin and push that data into nsq, where it's picked up by telegraf and pushed into influx. So it makes sense that those values are pretty close to the same.
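(For illustration only: a minimal Go sketch of the producer side of that path, assuming the fluentd plugin serializes each parsed nginx log line as an InfluxDB line-protocol point and publishes it to an nsq topic. The topic, measurement, tag, and field names below are placeholders, not the project's real schema.)

```go
package main

import (
	"fmt"
	"log"

	nsq "github.com/nsqio/go-nsq"
)

func main() {
	// Connect to a local nsqd; the address is a placeholder.
	producer, err := nsq.NewProducer("127.0.0.1:4150", nsq.NewConfig())
	if err != nil {
		log.Fatal(err)
	}
	defer producer.Stop()

	// One parsed nginx access-log line becomes one InfluxDB line-protocol
	// point. Measurement, tags, fields, and topic name are assumptions,
	// not the names the deis fluentd plugin actually uses.
	point := fmt.Sprintf("nginx_requests,app=%s,host=%s request_time=%f,status=%di",
		"myapp", "node-1", 0.042, 200)

	// telegraf's nsq_consumer input would read this topic and forward the
	// point to influxdb, which (as discussed below) stamps it at write time.
	if err := producer.Publish("metrics", []byte(point)); err != nil {
		log.Fatal(err)
	}
}
```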
From @felipejfc on June 6, 2017 21:57 I don't know if we are on the same page here. What I'm saying is: this is wrong, we don't have this kind of spike in our normal traffic pattern. To illustrate, this is how it looks after restarting all deis-fluentd pods: this is normal (no spikes).
From @jchauncey on June 6, 2017 22:51 So you're saying that the spikes on the messages/second graph aren't normal?
From @felipejfc on June 7, 2017 00:32 nope, they aren't normal :/
From @felipejfc on June 7, 2017 14:41 @jchauncey looking at the nsq logs, it seems that the peaks are related to these events:
From @jchauncey on June 7, 2017 14:50 Are you seeing that set of messages occur frequently in your logs? Or does it just happen once and then you see the spike?
From @felipejfc on June 7, 2017 14:52 It happens frequently, and then I have frequent spikes in the graphs. It seems that a spike occurs right after a message of this kind, which does make sense: I guess the fluentd pod loses its connection to nsq, and when it re-establishes it, it sends the messages that were queued up...
From @jchauncey on June 7, 2017 14:54 That sounds like a pretty good conclusion. Is this worrying to you? Is it causing any issues with the system itself?
From @felipejfc on June 7, 2017 14:56 Not with the system, but the "Requests Per Second" graph is pretty important for us to monitor Deis; we have a television in our room with just that graph on it. We've also set up alarms on the number of requests to detect anomalies, so this kind of spike messes up all our monitoring :/
From @jchauncey on June 7, 2017 14:59 Ah ok, I understand. I wonder if there is something we could do in grafana to smooth out the graph?
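(For illustration only: such smoothing would normally live in the Grafana/InfluxDB query itself, e.g. as a moving average; this rough Go sketch just shows what a sliding-window mean does to a requests/sec series with one replayed-backlog spike like the ones described above.)

```go
package main

import "fmt"

// movingAverage smooths a requests/sec series with a simple
// sliding-window mean of the given width.
func movingAverage(series []float64, window int) []float64 {
	smoothed := make([]float64, 0, len(series))
	sum := 0.0
	for i, v := range series {
		sum += v
		if i >= window {
			sum -= series[i-window]
		}
		n := window
		if i+1 < window {
			n = i + 1
		}
		smoothed = append(smoothed, sum/float64(n))
	}
	return smoothed
}

func main() {
	// A mostly flat series with one spike caused by a replayed backlog.
	series := []float64{200, 210, 205, 2400, 215, 208, 212}
	fmt.Println(movingAverage(series, 3))
}
```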
From @felipejfc on June 7, 2017 15:07 I guess we could, but I'd rather work on the root of the problem, like figuring out why the fluentd pods are losing their connection to nsq and then fixing that...
From @jchauncey on June 7, 2017 15:34 Alright, I'm going to take a look and see if I can reproduce this on my cluster and maybe get you a test build to try out.
From @felipejfc on June 7, 2017 16:46 ok! thanks!
From @jchauncey on June 7, 2017 19:01 So one thing we don't do right now is assign timestamps to the metrics when we create them (which is done in fluentd). This is because I could never get influxdb to accept the right precision from the values coming off of nsq, so I decided to let it add that value at write time. This means that if you sent 1000 extra messages to influx that actually happened at an earlier time, they would still get the timestamp of the write time.
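(A hedged sketch of what assigning the timestamp at creation time could look like, reusing the placeholder line-protocol schema from the earlier sketch: appending the event time in nanoseconds means replayed messages keep the time the request actually happened instead of piling up at write time. The catch mentioned above is precision: whatever unit is appended here has to match the precision influxdb expects on the write path; nanoseconds is the line-protocol default.)

```go
package main

import (
	"fmt"
	"time"
)

// formatPoint builds an InfluxDB line-protocol point and appends the
// event timestamp in nanoseconds, so the stored time is when the request
// happened rather than when the point was written. Measurement, tags,
// and fields are placeholders, not the plugin's real schema.
func formatPoint(app, host string, requestTime float64, status int, eventTime time.Time) string {
	return fmt.Sprintf("nginx_requests,app=%s,host=%s request_time=%f,status=%di %d",
		app, host, requestTime, status, eventTime.UnixNano())
}

func main() {
	// With an explicit timestamp, a backlog replayed after an nsq reconnect
	// spreads out over its original time range instead of landing as a
	// single spike at write time.
	fmt.Println(formatPoint("myapp", "node-1", 0.042, 200, time.Now()))
}
```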
From @felipejfc on June 7, 2017 19:54 It makes total sense to send the timestamp with each metric, but I'd still like to figure out why some of the fluentd pods are reaching this state (timing out over and over again).
From @jchauncey on June 7, 2017 19:55 The pod network isn't super reliable in my experience (but that doesn't account for much here). That's why we backed all this with a queue, to help with those types of problems.
From @felipejfc on June 6, 2017 21:41
Deleting all deis-fluentd pods solves the problem for like 10 minutes or so...
My guess is that this is related to the number of metrics/sec being sent from each host to nsq; it's something like 2000 metrics/sec.
Or maybe the number of logs fluentd is parsing?
Copied from original issue: deis/fluentd#99