View Issue Details

ID: 0002640
Project: Slicer4
Category: Core: Base Code
View Status: public
Last Update: 2013-07-08 09:38
Reporter: almar
Assigned To: pieper
Priority: normal
Severity: feature
Reproducibility: N/A
Status: assigned
Resolution: open
Product Version: Slicer 4.1.1
Target Version:
Fixed in Version:
Summary0002640: Feature request: Memory compression
Description

To allow loading numerous volumes in Slicer, I think it would be a good idea to implement an optional zram-like caching and compression method (http://code.google.com/p/compcache/).

Since the nii.gz format can often reduce the footprint of images considerably (in my database, often by 90%), this would lead to more efficient memory management and, on memory-constrained systems (which in my experience means everything below 8 GB of RAM), to a speedup.
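To get a feel for the kind of savings being described, here is a minimal sketch (plain Python `zlib`, not Slicer code) compressing a synthetic buffer that is mostly zeros, as is typical of masked or padded medical volumes:

```python
import zlib

# Hypothetical volume buffer: 1 MiB, roughly 90% zeros (synthetic data,
# standing in for a masked medical image).
raw = bytearray(1024 * 1024)
for i in range(0, len(raw), 10):   # nonzero value in every 10th byte
    raw[i] = i % 255 or 1

compressed = zlib.compress(bytes(raw), level=1)  # fast, light compression
ratio = len(compressed) / len(raw)
print(f"raw: {len(raw)} bytes, compressed: {len(compressed)} bytes "
      f"(ratio {ratio:.2%})")
```

Real clinical volumes will compress less predictably than synthetic data, but zero-heavy datasets routinely approach the 90% figure mentioned above.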

Tags: No tags attached.

Activities

pieper (administrator)

2012-10-15 05:01   ~0006525

Interesting idea - agreed that we are not particularly efficient in terms of memory use inside slicer.

There are a few considerations:

  • vtk and itk filters have a built-in assumption of direct pointer access to a contiguous block of memory for images, so anything that gets compressed would have to be hidden from them. For example we could make vtkMRMLVolumeNode::GetImageData() trigger a decompression and return a new uncompressed copy. We'd then need to observe modifications to it and reflect them back to the compressed copy.

  • there is a definite time/space tradeoff here. We used to compress all files going to/from command line modules, but realized this was very time consuming and served no purpose in that scenario; this is just to point out that compression is not free.

  • I like the idea of the zram linux system you pointed to. Doing the compression at the system level would avoid some of the memory access issues and would put the time/space decisions into the user's hands.
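The first point above — decompress on access, then re-compress once the caller's modifications are committed — could be sketched roughly as follows. This is a hypothetical Python wrapper using `zlib`, not the actual MRML API; a real implementation would need VTK observer plumbing to detect modifications to the returned image data:

```python
import zlib

class CompressedBuffer:
    """Keeps image bytes compressed at rest, hands out an uncompressed
    copy on demand, and re-compresses when the caller commits changes.
    (Sketch only; names and methods here are illustrative.)"""

    def __init__(self, data: bytes, level: int = 1):
        self._blob = zlib.compress(data, level)
        self._level = level

    def get(self) -> bytearray:
        # Analogous to GetImageData(): return a mutable uncompressed copy.
        return bytearray(zlib.decompress(self._blob))

    def commit(self, data: bytes) -> None:
        # Analogous to observing a Modified event and re-compressing.
        self._blob = zlib.compress(data, self._level)

    @property
    def compressed_size(self) -> int:
        return len(self._blob)

# Usage: a mostly-zero "volume" stays tiny at rest.
buf = CompressedBuffer(bytes(1_000_000))
view = buf.get()
view[0] = 42                # caller edits the uncompressed copy
buf.commit(bytes(view))     # changes are reflected back, compressed
print(buf.compressed_size, "bytes at rest for 1 MB of voxels")
```

The design cost is visible here too: every `get()`/`commit()` pair pays a full decompress/compress cycle, which is exactly the time/space tradeoff raised in the second point.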

almar (reporter)

2012-10-15 05:57   ~0006527

Yes, actually I have already tried it on an older Core 2 Duo machine with 4 GB of RAM. A similar calculation on my 8 GB machine takes up around 6 GB of RAM. Using zram I saw a speedup of around 50% because of the reduced (or non-existent) swapping.

The implementation uses a very light form of compression. Since many of the datasets contain a large proportion of zeros, the break-even point between compression and CPU load is attractive. But of course you are right, also about the conclusion that a system-level implementation would be better (and faster than a module). Windows and Mac, however, do not easily allow this kind of memory management.
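The break-even point depends heavily on the compression level. A quick way to see the time/space tradeoff (on synthetic all-zero data, the best case, so real volumes will do worse on both axes):

```python
import time
import zlib

# 4 MiB of zeros: an idealized best case for a zero-heavy dataset.
data = bytes(4 * 1024 * 1024)

for level in (1, 6, 9):
    t0 = time.perf_counter()
    out = zlib.compress(data, level)
    dt = time.perf_counter() - t0
    print(f"level {level}: {len(out)} bytes in {dt * 1000:.1f} ms")
```

Level 1 is typically the right choice for this use case: it captures most of the size reduction on zero-heavy data while keeping CPU cost low enough that the saved swapping still wins.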

As you mentioned, you could write a separate routine for it in the get/set image data methods to make it transparent.

Issue History

Date Modified Username Field Change
2012-10-14 06:53 almar New Issue
2012-10-14 06:53 almar Status new => assigned
2012-10-14 06:53 almar Assigned To => pieper
2012-10-15 05:01 pieper Note Added: 0006525
2012-10-15 05:57 almar Note Added: 0006527
2012-12-08 10:12 jcfr Target Version => Slicer 4.3.0
2013-07-08 09:38 pieper Severity tweak => feature
2013-07-08 09:38 pieper Target Version Slicer 4.3.0 =>