Some research on middle performance #2110
Comments
We should move to creating the UNIQUE index after loading the data. It might not have sped up your test, but I believe it would on other hardware or with a different number of threads. Additionally, the resulting index is properly balanced and contains no dead tuples.
This would rule out ever having multiple threads writing to the middle at the same time. Do we want to do that?
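A minimal sketch of the load-then-index pattern being proposed, assuming an illustrative `ways` table (not osm2pgsql's actual schema):

```sql
-- Create the table without a primary key so the bulk COPY is not
-- slowed down by incremental index maintenance.
CREATE TABLE ways (
    id    int8 NOT NULL,
    nodes int8[] NOT NULL,
    tags  jsonb
);

-- Bulk-load the data.
COPY ways FROM '/path/to/ways.copy';

-- Build the unique index in one pass over the finished table; the
-- result is compactly packed and properly balanced.
CREATE UNIQUE INDEX ways_pkey ON ways (id);
ALTER TABLE ways ADD CONSTRAINT ways_pkey PRIMARY KEY USING INDEX ways_pkey;
```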
I did some further tests. All numbers reported here are for the ways table only. The simple COPY took 57 minutes. Tests were done using PostgreSQL 15. Supposedly there are some improvements in COPY performance in PostgreSQL 16, so we should also test with 16 and/or 17.

**Using binary format**

COPY can be done with the text format (which we currently use) or a binary format. Using the binary format, the COPY time drops to 40 minutes, so we are about 30% faster. The binary format is not well documented and the documentation says it might change, but as far as I can see it hasn't changed since PostgreSQL 7, so this is unlikely to be a problem. Even more so because we are only writing into the database, not reading, so we have more control over the format than if we had to parse the format the database generates. This should also save us some time on the osm2pgsql side, because generating the binary format is probably faster than generating the text format. Geometries no longer have to be hex-encoded, so the number of bytes transferred for them is only half, which should also help. I can see no real downside; we should consider switching.

**Parallel COPYs**

I tried simulating parallel COPYs by splitting a COPY file into 2 (or 4) pieces and running 2 (or 4) copies simultaneously. This is with the text format. The import times were 33 minutes (or 23 minutes for 4 COPYs). It is hard to tell how well this would work in practice, but it is definitely worth a shot. The CPU usage of the postgres process doing the COPY went from 100% with one COPY to something like 70%, so it looks like we are no longer CPU-bound but either I/O-bound or hitting some limits on locks or so. This might make the data format on disk less efficient, though; maybe creating the index will take longer or usage of the resulting table will be slightly slower.

**With FREEZE**

Using COPY FREEZE the import takes 49 minutes; using FREEZE and the binary format we are at 32 minutes, almost halving the current time. Unfortunately COPY FREEZE does not work together with parallel COPYs, because you need to create the table in the same transaction in which you do the COPY FREEZE. I tried using snapshot synchronization to overcome this, but it didn't work. Maybe I didn't do it right, but there is probably something in there that prevents this from working; my transactions always got rolled back when I tried it. Using COPY FREEZE would be simpler to implement than having multiple connections for parallel copies, so this is still something to consider.

**UNLOGGED TABLE**

Somewhere on the Internet I found the suggestion to create the table as an UNLOGGED TABLE, do the import, and then ALTER the table to LOGGED. This does not help: the COPY took about the same time and the ALTER TABLE took quite some time, so this is a dead end.
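A rough sketch of the COPY FREEZE variant described above, again assuming an illustrative `ways` table (the binary file would be produced by osm2pgsql in PostgreSQL's binary COPY format):

```sql
BEGIN;

-- COPY FREEZE only works if the table was created (or truncated)
-- in the same transaction, so CREATE TABLE goes inside it.
CREATE TABLE ways (
    id    int8 NOT NULL,
    nodes int8[] NOT NULL,
    tags  jsonb
);

-- Rows are written already frozen, skipping later freeze work; the
-- binary format avoids text parsing and hex-encoding of geometries.
COPY ways FROM '/path/to/ways.pgcopy' WITH (FORMAT binary, FREEZE);

COMMIT;
```

The dead-end UNLOGGED variant would instead be `CREATE UNLOGGED TABLE ...` followed by `ALTER TABLE ways SET LOGGED;` after loading.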
I don't think parallel COPY FREEZE is possible with snapshot synchronization. The docs state
The starting place is synchronized, but they're still separate transactions which can diverge. Do we need to worry about the gains from freezing tuples at COPY time? We have to run …
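For reference, the snapshot-synchronization mechanism under discussion looks roughly like this; it is a sketch of the attempt, not a working parallel COPY FREEZE, which is exactly what the comments above conclude is not possible:

```sql
-- Session 1: open a transaction and export its snapshot.
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT pg_export_snapshot();   -- returns an ID, e.g. '00000003-0000001B-1'

-- Session 2: start a separate transaction on the same snapshot
-- (must be the first statement, before any query).
BEGIN ISOLATION LEVEL REPEATABLE READ;
SET TRANSACTION SNAPSHOT '00000003-0000001B-1';

-- Both sessions now see the same starting state, but they remain
-- independent transactions: a table created in session 1 was not
-- created "in the same transaction" from session 2's point of view,
-- which is what COPY FREEZE requires.
```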
We are currently running …
The advice I've had from PostgreSQL experts is that for optimal performance you should run VACUUM on a freshly loaded table. It does maintenance work other than dead-tuple cleanup: the visibility map, the free space map (FSM), and xid information are some of the things updated. However, although VACUUM does important work, I'm not sure any of it really matters for the typical osm2pgsql workload. The docs state

> The VACUUM command has to process each table on a regular basis for several reasons:
> 1. To recover or reuse disk space occupied by updated or deleted rows.
> 2. To update data statistics used by the PostgreSQL query planner.
> 3. To update the visibility map, which speeds up index-only scans.
> 4. To protect against loss of very old data due to transaction ID wraparound or multixact ID wraparound.

1 does not matter because there are no updated or deleted rows at this point in time. 2 is taken care of by a separate ANALYZE. 3 does not help us because the queries we run against the middle (or rendering) tables don't work as index-only scans. 4 is a valid concern, but regular auto-vacuums will take care of that, and osm2pgsql DBs don't normally have enough transactions per second for wraparound to matter. I still can't figure out whether the FSM matters for us.
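The distinction being drawn here, expressed as commands (table name illustrative):

```sql
-- Statistics only: cheap, and what the query planner actually needs
-- after a fresh bulk load.
ANALYZE ways;

-- Full maintenance pass: also updates the visibility map, free space
-- map, and xid bookkeeping, but has to read the whole table.
VACUUM ways;

-- Both in one pass.
VACUUM (ANALYZE) ways;
```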
I did some further tests regarding VACUUM:
So not only is the …
I did some research on how fast the import into the middle tables is and how much we could possibly improve performance. For that I compared actual osm2pgsql runs with running COPY "manually".
All the numbers are based on a single run, so take them with a grain of salt.
All experiments were done with the new middle (`--middle-database-format=new`) and without flat node files, i.e. all nodes were imported into the database.

First I did an import with `--slim -O null`, i.e. without output tables, to get the current baseline. Internally this will create the tables with a primary key constraint on the id column and then import the data using COPY. I looked only at the timings for that part, not at building the extra indexes, which happens later. Indexing has to be done anyway and happens completely in the database, so it is unlikely that we can do much about that part.

I then dumped out the data in COPY format and re-created the same database by creating the tables and running COPY with psql (see the sketch at the end of this comment). The time for this is the shortest time we can likely get; the difference between this time and the import time is the time needed to read and convert the OSM data, i.e. the necessary or unnecessary overhead generated by osm2pgsql.

I then tried some variations:
Here are the timings (in minutes):
Some results from this research:
We also have to keep in mind that the situation is different if we use a flat nodes file. (And also different if we use the `--extra-attributes` option.)
And in real situations there is interaction between the middle and the output, which I haven't looked at in detail so far. Most nodes don't have any tags, so they don't take up any time in the output code; the middle code is the bottleneck there. For ways the situation is reversed: the middle is reasonably simple, while the output runs Lua code for basically every way, which is almost certainly the bottleneck.
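As mentioned above, the baseline comparison re-created the database from a COPY dump. A minimal sketch of that dump/reload cycle, assuming an illustrative `ways` table (the real middle tables differ):

```sql
-- Dump the loaded table in COPY text format. This writes a
-- server-side file; from psql one would use \copy instead to
-- write a client-side file.
COPY ways TO '/tmp/ways.copy';

-- Re-create the table the way the baseline import does (primary key
-- on the id column) and load the dump, timing only this part.
CREATE TABLE ways_reloaded (
    id    int8 NOT NULL PRIMARY KEY,
    nodes int8[] NOT NULL,
    tags  jsonb
);
COPY ways_reloaded FROM '/tmp/ways.copy';
```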