Startup error: Failed to launch chrome #4
It's ironic you're using this package, as your package was one of the reasons I felt safe switching from Galaxy over to AWS!

What version of Meteor are you running? I had some major issues when I upgraded from 1.5.x to 1.6.x. I could no longer get phantomjs and webdriverio running properly after updating. I wasn't able to trace down what changed, and rather than spend a lot of time working with an already poor solution, I attempted to use puppeteer. While it worked well in dev, once again I couldn't get it up and running on Galaxy itself due to missing dependencies that were not installed properly via

After doing some testing, I realized I was getting roughly 5x the performance on AWS t2 instances compared to an equal-sized Galaxy instance, for a fraction of the cost, so I decided to move completely away from Galaxy.

I would try v2.2.0 if you're running a Meteor app that's < v1.6. I should probably revert this repo back to the latest version I know was working with Meteor 1.5.x and add a deprecation notice to the readme. I'll do that shortly. You're right about
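The "missing dependencies" failure described above is typical of headless Chromium, which needs a number of shared libraries that minimal Docker images don't ship. As a sketch, on a Debian/Ubuntu-based image the commonly required packages can be installed like this (the exact list varies by Chromium version, so treat this set as an assumption to be checked against Puppeteer's troubleshooting guide):

```shell
# Sketch: shared libraries headless Chromium typically needs on a
# Debian/Ubuntu base image. The exact package set varies by Chromium version.
apt-get update && apt-get install -y --no-install-recommends \
    ca-certificates fonts-liberation \
    libasound2 libatk-bridge2.0-0 libgbm1 libgtk-3-0 \
    libnss3 libx11-xcb1 libxss1 libxtst6 \
 && rm -rf /var/lib/apt/lists/*
```

On a managed host like Galaxy, where you can't freely modify the base image, this is exactly the kind of step that is hard to apply, which matches the experience described above.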
:) I noticed that too. I'm running Meteor v1.6.1, so this might be the issue.
I'm extremely happy with AWS. I wrote a detailed article about how much effort I had to put into optimizing my app to run well on Galaxy. Look at this comparison of the same app with almost the same number of users:

Galaxy:

AWS:

Just look at the massively improved response times! And running on far cheaper machines.

There is a lot more than just those graphs, though. Under Galaxy, the CPU gets hard-capped at 15%, which means all those servers running close to 5% are 33% of the way to being hard-limited. Whenever this limit is hit, it results in a drastic spike in latency, which tends to cascade since Meteor waits for methods/pubs to execute in order. So I needed to auto-scale out up to 5 small containers to handle a few hundred active users. With AWS t2 instances, they can burst for a sustained period of time before being capped, so I never need to worry about sudden slowdowns.

As far as my setup, I put two t2.micro instances behind a network load balancer. I could probably run everything cheaper on a single t2.micro or t2.small now, but having a load balancer makes me feel safer and lets me do almost zero-downtime updates using MUP. I don't need to use a scaler anymore, as I have far more RAM, and CPUs that can burst for hours without getting capped. If I needed to, I feel it would be simple enough to create an auto-scaling group on AWS.

Oddly enough, the most expensive thing about my new setup is running APM. I had to use a t2.micro unlimited instance, and its CPU is pretty steady at ~80-90%.
I just remembered I wrote a post back in October but didn't publish it because it was rather critical of Galaxy. I love Meteor, but I am extremely dissatisfied with what Galaxy provides. Besides the issues listed above, I have had numerous major outages caused by Galaxy, without significant assurances they would not reoccur. Now that I've fully tested out AWS and have seen how much better it is, I'm comfortable posting it now: https://www.erichartzog.com/blog/meteor-galaxy-not-production-ready
I've been meaning to write a more detailed post about transitioning from Galaxy to AWS, so you inspired me to go ahead and do it: https://www.erichartzog.com/blog/aws-vs-galaxy-for-meteor-hosting
Thx for your articles!! I was hosting on Azure with a MUP deployment before, but it was not easy to scale. I really like the easy Docker scaling in Galaxy, but autoscaling is missing. As far as I understand your setup, you don't have autoscaling (because burst is enough)? Do you host your database in the same datacenter/region? Maybe that is related to your different pub/sub response times.
Everything I host is in us-east, so that's where my mLab Mongo DB is (AWS us-east), and my Galaxy deployment should have been us-east-1 as well.

You're correct: under Galaxy I needed scaling, since I would need up to 3-4 compact containers just to handle ~200 active users. Running that 24/7 would be a waste, since all my customers are in the same time zone and only active for ~16 hours a day. On AWS I set up the minimum 'production' setup I'm comfortable with, which is two t2.micro instances behind a load balancer, and I found out that I no longer need to scale up; the better performance of AWS means those two webservers can handle my entire customer base.

Because the t2.micro instances have 1 GB of memory each, they can handle a larger number of connected clients. And because the CPU bursts way higher for methods/pubs, there are not the same latency spikes as the CPU approaches its 'baseline'. I project that the current two servers could support at least 400 active users altogether before I'd start worrying about resource exhaustion. If you have slightly more than that, just configure 3 t2.micro instances. They are only ~$9 a month for each additional webserver! Plus, once you know exactly how many instances you want long term, purchase reserved instances for a year and you drop the overall cost by ~40%.
Thank you again :). How does your setup work when you deploy a new version?

I have been using ZEIT for a few days now, plus MongoDB Atlas. It is much cheaper than Galaxy and has autoscaling built in. But the load balancer is not working very well: response times are good (Galaxy was good for me too).
You correctly describe what happens when I push an update. It is non-ideal, so I mostly try to update during off-peak hours. I also have enough spare CPU/memory on AWS that it is no longer a problem if all of my clients suddenly switch to only one server; it responds quickly enough.

I looked at ZEIT and was impressed by what it offers. I didn't pick it for the same reason I didn't use Nodechef, but I'm glad to hear it's generally working well.
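The two-t2.micro-instances-behind-a-load-balancer deployment with MUP described in this thread could be configured roughly as follows. This is a sketch only: every host, username, app name, and URL below is a placeholder, not the author's actual configuration.

```javascript
// mup.js -- sketch of a two-server MUP deployment (all values are placeholders).
module.exports = {
  servers: {
    one: { host: '203.0.113.10', username: 'ubuntu', pem: '~/.ssh/id_rsa' },
    two: { host: '203.0.113.11', username: 'ubuntu', pem: '~/.ssh/id_rsa' },
  },
  app: {
    name: 'my-app',
    path: '../',
    // Deploying to both servers: the load balancer keeps serving from one
    // while the other restarts, which is what makes updates near zero-downtime.
    servers: { one: {}, two: {} },
    env: {
      ROOT_URL: 'https://app.example.com',
      MONGO_URL: 'mongodb://user:pass@example.mlab.com:27017/my-app',
    },
    // Give each container time to come up before MUP moves on to the next.
    deployCheckWaitTime: 60,
  },
};
```

With this shape, a rolling `mup deploy` restarts the servers one at a time, matching the "almost zero-downtime" behavior mentioned above.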
Hey, thx for this package :)
I get this error when trying out autoscale:
I installed this package and started it like this:
I am using the default Galaxy Docker image.
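For context on the "Failed to launch chrome" error in the title: inside a Docker container this is commonly resolved by passing sandbox-related flags when launching headless Chromium. A minimal sketch of such launch options follows; the flags themselves are standard Chromium/Puppeteer options, but whether (and how) this particular package lets you pass them through is an assumption.

```javascript
// Sketch: launch options commonly needed to run headless Chromium in Docker.
// Whether this package exposes a way to pass these through is an assumption.
const launchOptions = {
  headless: true,
  args: [
    '--no-sandbox',             // Chromium's sandbox usually cannot start in a container
    '--disable-setuid-sandbox',
    '--disable-dev-shm-usage',  // /dev/shm is small by default in Docker
  ],
};

// Usage with Puppeteer would look like:
// const browser = await puppeteer.launch(launchOptions);
```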