
Startup error: Failed to launch chrome #4

Open
lmachens opened this issue Feb 16, 2018 · 9 comments
@lmachens

Hey thx for this package :)
I get this error when trying out autoscale:

2018-02-16 08:45:00+01:00 Error
2018-02-16 08:45:00+01:00 Failed to launch chrome!
2018-02-16 08:45:00+01:00 /app/bundle/programs/server/npm/node_modules/meteor/avariodev_galaxy-autoscale/node_modules/puppeteer/.local-chromium/linux-515411/chrome-linux/chrome: error while loading shared libraries: libXcomposite.so.1: cannot open shared object file: No such file or directory
2018-02-16 08:45:00+01:00 TROUBLESHOOTING: https://github.com/GoogleChrome/puppeteer/blob/master/docs/troubleshooting.md
2018-02-16 08:45:00+01:00 Error: Failed to launch chrome!
2018-02-16 08:45:00+01:00 /app/bundle/programs/server/npm/node_modules/meteor/avariodev_galaxy-autoscale/node_modules/puppeteer/.local-chromium/linux-515411/chrome-linux/chrome: error while loading shared libraries: libXcomposite.so.1: cannot open shared object file: No such file or directory
2018-02-16 08:45:00+01:00 TROUBLESHOOTING: https://github.com/GoogleChrome/puppeteer/blob/master/docs/troubleshooting.md
2018-02-16 08:45:00+01:00     at onClose (/app/bundle/programs/server/npm/node_modules/meteor/avariodev_galaxy-autoscale/node_modules/puppeteer/lib/Launcher.js:211:14)
2018-02-16 08:45:00+01:00     at Interface.helper.addEventListener (/app/bundle/programs/server/npm/node_modules/meteor/avariodev_galaxy-autoscale/node_modules/puppeteer/lib/Launcher.js:200:50)
2018-02-16 08:45:00+01:00     at emitNone (events.js:111:20)
2018-02-16 08:45:00+01:00     at Interface.emit (events.js:208:7)
2018-02-16 08:45:00+01:00     at Interface.close (readline.js:370:8)
2018-02-16 08:45:00+01:00     at Socket.onend (readline.js:149:10)
2018-02-16 08:45:00+01:00     at emitNone (events.js:111:20)
2018-02-16 08:45:00+01:00     at Socket.emit (events.js:208:7)
2018-02-16 08:45:00+01:00     at endReadableNT (_stream_readable.js:1055:12)
2018-02-16 08:45:00+01:00     at _combinedTickCallback (internal/process/next_tick.js:138:11)
2018-02-16 08:45:00+01:00     at process._tickCallback (internal/process/next_tick.js:180:9)
2018-02-16 08:45:00+01:00     => awaited here:
2018-02-16 08:45:00+01:00     at Function.Promise.await (/app/bundle/programs/server/npm/node_modules/meteor/promise/node_modules/meteor-promise/promise_server.js:56:12)
2018-02-16 08:45:00+01:00     at Promise.asyncApply (packages/avariodev:galaxy-autoscale/lib/autoscale.js:16:15)
2018-02-16 08:45:00+01:00     at /app/bundle/programs/server/npm/node_modules/meteor/promise/node_modules/meteor-promise/fiber_pool.js:43:40
2018-02-16 08:45:00+01:00 (node:7) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): ReferenceError: page is not defined
2018-02-16 08:45:00+01:00 (node:7) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
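For context: the libXcomposite.so.1 failure means the container is missing the shared libraries headless Chromium links against. One possible direction, sketched here as an untested Dockerfile fragment, would be a base image that installs them. The base image name below is a placeholder, and the package list should be checked against the Puppeteer troubleshooting document linked in the log before relying on it:

```dockerfile
# Untested sketch: install the X/graphics libraries headless Chromium needs.
# "your-meteor-base-image" is a placeholder for whatever Debian-based image
# your app actually runs on.
FROM your-meteor-base-image

RUN apt-get update && apt-get install -y --no-install-recommends \
    libxcomposite1 libxcursor1 libxdamage1 libxi6 libxtst6 \
    libnss3 libcups2 libxss1 libxrandr2 libasound2 \
    libatk1.0-0 libgtk-3-0 \
 && rm -rf /var/lib/apt/lists/*
```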

I installed this package and start it like this:

GalaxyAutoScale.config({
  appUrl: Meteor.settings.galaxy.appUrl,
  username: Meteor.settings.galaxy.username,
  password: Meteor.settings.galaxy.password,
  scalingRules: {
    containersMin: 1,
    containersMax: 6,
    connectionsPerContainerMax: 120,
    connectionsPerContainerMin: 80
  },
  alertRules: {
    cpuPercentageMax: 80
  }
});

GalaxyAutoScale.addSyncedCronJob();

GalaxyAutoScale.startSyncedCron(); // The README.md says GalaxyAutoScale.start(), but I think this is correct
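For readers wondering what those scalingRules amount to, here is an illustrative simplification of connection-based autoscaling. This is my own sketch, not the actual logic inside avariodev:galaxy-autoscale; a real implementation would also use connectionsPerContainerMin and the current container count for scale-down hysteresis:

```javascript
// Illustrative only: derive a container count from total connections so that
// the average load per container stays at or below the configured maximum.
function targetContainers(totalConnections, rules) {
  // Containers needed to keep average connections under the max...
  const needed = Math.ceil(totalConnections / rules.connectionsPerContainerMax);
  // ...clamped to the configured floor and ceiling.
  return Math.min(rules.containersMax, Math.max(rules.containersMin, needed));
}

const rules = {
  containersMin: 1,
  containersMax: 6,
  connectionsPerContainerMax: 120,
  connectionsPerContainerMin: 80,
};

console.log(targetContainers(500, rules)); // 5 containers for ~500 sessions
```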

I am using the default Galaxy Docker image.

@jehartzog
Owner

It's ironic you're using this package, as your package was one of the reasons I felt safe switching from Galaxy over to AWS!

What version of Meteor are you running? I had some major issues when I upgraded from 1.5.x to 1.6.x: I could no longer get phantomjs and webdriverio running properly after updating. I wasn't able to trace down what changed, and rather than spend a lot of time working with an already poor solution, I attempted to use puppeteer.

While it worked well on dev, once again I couldn't get it up and running on Galaxy itself due to missing dependencies that were not installed properly via npm install.

After doing some testing, I realized I was getting roughly 5x the performance on AWS t2 instances compared to an equally sized Galaxy instance, for a fraction of the cost, so I decided to move completely away from Galaxy.

I would try v2.2.0 if you're running a Meteor app that's < v1.6.

I should probably revert this repo back to the latest version I know was working with Meteor 1.5.x and add a deprecation notice to the readme. I'll do that shortly.

You're right about GalaxyAutoScale.startSyncedCron();, thanks for pointing that out!

@lmachens
Author

:) I noticed that too.
Maybe off-topic, but are you happy with AWS?
What about autoscaling, and how do you deploy there?
I would be thankful for some resources, because I am not that satisfied with Galaxy so far. It is expensive and slow...

I'm running Meteor v1.6.1, so this might be the issue.

@jehartzog
Owner

I'm extremely happy with AWS. I wrote a detailed article about how much effort I had to put in to optimizing my app in order to run well on Galaxy.

Look at this comparison of the same app with almost the same number of users:

Galaxy:

[screenshot: skoolers-peak-post-oplog]

AWS:

[screenshot taken 2018-02-16 at 12:08 PM]

Just look at the massively improved response times! And it's running on far cheaper machines.

There is a lot more to it than just those graphs, though. Under Galaxy, the CPU gets hard-capped at 15%, which means all those servers running close to 5% are already a third of the way to being hard-limited. Whenever this limit is hit, it results in a drastic spike in latency, which tends to cascade since Meteor waits for methods/pubs to execute in order. So I needed to auto-scale out to up to 5 small containers to handle a few hundred active users.

With AWS t2 instances, they can burst for a sustained period of time before being capped, so I never need to worry about sudden slowdowns.

As far as my setup goes, I put two t2.micro instances behind a network load balancer. I could probably run everything cheaper on a single t2.micro or t2.small now, but having a load balancer makes me feel safer and lets me do almost zero-downtime updates using MUP. I don't need to use a scaler anymore, as I have far more RAM and CPUs that can burst for hours without getting capped. If I needed to, I feel it would be simple enough to create an auto scaling group on AWS.

Oddly enough, the most expensive thing about my new setup is running APM. I had to use a t2.micro unlimited instance, and its CPU is pretty steady at ~80-90%.

@jehartzog
Owner

I just remembered I wrote a post back in October but didn't publish it because it was rather critical of Galaxy. I love Meteor, but am extremely dissatisfied with what Galaxy provides. Besides the issues listed above, I have had numerous major outages caused by Galaxy, without significant assurances that they would not recur.

Now that I've fully tested out AWS and have seen how much better it is, I'm comfortable posting this now.

https://www.erichartzog.com/blog/meteor-galaxy-not-production-ready

@jehartzog
Owner

I've been meaning to write a more detailed post about transitioning from Galaxy to AWS, so you inspired me to go ahead and do it.

https://www.erichartzog.com/blog/aws-vs-galaxy-for-meteor-hosting

@lmachens
Author

lmachens commented Feb 16, 2018

Thx for your articles!! I was hosting on Azure with a mup deployment before, but it was not easy to scale. I really like the easy Docker scaling in Galaxy, but autoscaling is missing. As far as I understand your setup, you don't have autoscaling (because bursting is enough)?
The total sessions of my app vary from 130 in the morning to 500 in the evening, and sometimes even more on weekends, so I need an easy solution for automatic scaling.

Do you host your database in the same datacenter/region? Maybe that is related to the difference in your pub/sub response times.

@jehartzog
Owner

Everything I host is in us-east, so that's where my mLab Mongo DB is (AWS us-east), and my Galaxy containers should have been in us-east-1 as well.

You're correct: under Galaxy I needed scaling, since I would need up to 3-4 compact containers just to handle ~200 active users. Running that 24/7 would be a waste, since all my customers are in the same time zone and only active ~16 hours a day.

Under AWS I set up the minimum 'production' setup I'm comfortable with, which is two t2.micro instances behind a load balancer, and I found out that I no longer need to scale up: the better performance of AWS means those two web servers can handle my entire customer base. Because the t2.micro instances have 1 GB of memory each, they can handle a larger number of connected clients. And because the CPU bursts way higher for methods/pubs, there aren't the same latency spikes as CPU approaches the 'baseline'.

I project that the current two servers could support at least 400 active users altogether before I'd start worrying about resource exhaustion.

If you have slightly more than that, just configure 3 t2.micro instances. They are only ~$9 a month for each additional web server!

Plus, when you know exactly how many instances you want long term, reserve them for a year and you drop the overall cost by ~40%.

@lmachens
Author

Thank you again :). How does your setup work when you deploy a new version?
If you deploy a new version server by server, all connections will be redirected to the remaining or the new containers. A user connected to server 1 will be moved to server 2/3 while server 1 is restarting with the new version. After that, server 2 is restarted and the user reconnects to server 3/1. So you get multiple reconnections -> re-created observers. I had issues deploying a new version with mup when too many users were online.

I have been using ZEIT for a few days now, together with MongoDB Atlas.
Deployment is easy with meteor-now.

It is much cheaper than Galaxy and has autoscaling built in. But the load balancer is not working very well:
[screenshot]

Response times are good (Galaxy was good for me there too).
They use aliases to avoid spikes on new deployments: you deploy a new version and scale it up (you had 3 containers with the old version -> 3 new containers with the new version), then you change your domain alias to point at the new containers. I think Galaxy does something similar.

@jehartzog
Owner

You correctly describe what happens when I push an update. It is non-ideal, so I mostly try to update during off-peak hours. I also have enough spare CPU/memory on AWS that it is no longer a problem if all of my clients suddenly switch to a single server; it responds quickly enough.

I looked at ZEIT and was impressed by what it offers. I did not pick it for the same reason I did not use NodeChef, but I'm glad to hear it's generally working well.
