Changes between Version 12 and Version 13 of Performance


Timestamp:
Aug 7, 2013, 3:35:50 PM
Author:
Dimitar Misev
  • Performance

    v12 v13  
    31 31 == PostgreSQL ==
    32 32 The default PostgreSQL configuration can be tuned for better performance with rasdaman. The parameters below should be adapted in `postgresql.conf`, typically found under `/etc/postgresql`.
    33     * ''max_connections'' - as long as a single user is using the database, this can be decreased to about 40
        33 * ''max_connections'' - as long as a single user is using the database, this can be decreased to about 40. When many rasservers are started (e.g. 20+) and concurrent connections are expected, this value should be increased, or queries may fail at random when the connection threshold is exceeded (see comment:40:ticket:133)
    34 34 * ''shared_buffers'' - should be 25%-33% of your total RAM
    35 35 * ''work_mem'' - `(total RAM / connections) / 4..16`, but not lower than 128 MB. With 4 GB RAM and a single user, 256 MB is fine.
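The sizing rules above can be sketched as a small calculation. This is a hedged illustration of the rules of thumb from the list, not part of PostgreSQL or rasdaman; the function name and the choice of divisor are illustrative assumptions.

```python
# Illustrative sketch of the tuning rules above (all values in MB).
# The 4..16 divisor range and the 128 MB floor come from the guidelines;
# suggest_settings() is a hypothetical helper, not a PostgreSQL API.

def suggest_settings(total_ram_mb, connections, work_mem_divisor=16):
    """Return (shared_buffers_mb, work_mem_mb) following the rules of thumb."""
    shared_buffers = total_ram_mb // 4                 # 25% of RAM (up to 33%)
    work_mem = (total_ram_mb // connections) // work_mem_divisor
    work_mem = max(work_mem, 128)                      # never below 128 MB
    return shared_buffers, work_mem

# Example from the text: 4 GB RAM, single user -> work_mem of 256 MB.
print(suggest_settings(4096, 1))   # (1024, 256)
```

With many connections (e.g. the 40 suggested above) the per-connection share shrinks, and the 128 MB floor takes over.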
     
    174 174 }}}
    175 175
     176 To make the most of the tile cache feature, it's important to understand how it works. The tile cache especially helps with regular tiling, when there are many partial updates at slices along some axis. The [wiki:Tiling#Caveat regular tiling] section provides an introduction to this; here we continue with the same example and define a ''best practice'' for using the tile cache.
     177
     178 As demonstrated by the regular tiling example, inserting a 1000x1000x1 data cube results in generating a 1000x1000x1000 cube, due to the specific tiling scheme used. If the tile cache is enabled and at least 1000x1000x1000 bytes = 1 GB of memory is allowed with `--cachelimit`, this cube will fit in memory and will be cached. If less memory is allowed for the tile cache, then the 100x100x1000 tiles that don't fit in memory will have to be written to disk. Assuming that the whole cube fits in main memory, any slice updates along the third dimension will be very fast, e.g.
     179 {{{
     180 update test as m set m[*:*,*:*,1] assign marray x in [0:999,0:999] values 1c
     181 update test as m set m[*:*,*:*,2] assign marray x in [0:999,0:999] values 1c
     182 ...
     183 update test as m set m[*:*,*:*,999] assign marray x in [0:999,0:999] values 1c
     184 }}}
     185 Before making an update at slice 1000, it is best to flush the current cube to disk with
     186 {{{
     187 rasql -q 'commit'
     188 }}}
     189 This is because `update test as m set m[*:*,*:*,1000] ..` will initiate creation of a completely new cube of tiles, namely `[0:99,0:99,1000:1999]`, `[0:99,100:199,1000:1999]`, etc.
     190
     191 Therefore, it is best to group updates by such tile cubes, manually ''commit'' when the data in one tile cube is completely ingested, and avoid jumping from an update in one cube to an update in another (e.g. slice 1, then 1000, then 2, then 1001, etc. with respect to the example above).
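The grouping rule above can be sketched as a small generator that emits the update statements in cube order, with a `commit` between tile cubes. This is an illustrative sketch only: the collection and axis bounds are taken from the example above, the 1000-slice cube depth matches that tiling, and the generator itself is hypothetical, not a rasdaman tool.

```python
# Hedged sketch: emit rasql statements grouped by tile cube (1000 slices per
# cube, as in the example above), issuing 'commit' between cubes so each cube
# is flushed to disk before the next one is started.

def grouped_updates(total_slices, cube_depth=1000):
    stmts = []
    for z in range(total_slices):
        if z > 0 and z % cube_depth == 0:
            stmts.append("commit")          # flush the finished tile cube
        stmts.append(
            f"update test as m set m[*:*,*:*,{z}] "
            f"assign marray x in [0:999,0:999] values 1c"
        )
    return stmts

stmts = grouped_updates(2000)
# the single 'commit' falls between slice 999 (end of the first cube)
# and slice 1000 (start of the second cube)
```

Each generated statement would then be run via `rasql -q '...'`; the point of the ordering is simply that all updates touching one cube of tiles finish, and are committed, before the next cube is opened.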
     192
    176 193 == Important limitation ==
    177 194