I encountered an "Out of memory" error while analyzing about 10,000 metagenome reads against the 'nr-20140917-cablastx' database with cablastp-xsearch.
(The program was run on an Ubuntu server with 128 GB of memory and 16 cores.)
It seems the memory error can be avoided by enabling query compression (setting --compress-query="true" on the command line).
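For reference, the invocation was along the following lines (paths are placeholders, and I am assuming the usual argument order of database directory first, then query FASTA):

    cablastp-xsearch --compress-query="true" nr-20140917-cablastx/ metagenome-reads.fasta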
Should I use query compression when analyzing a large number of metagenome reads with cablastp-xsearch?