expdp liu/liu directory=dump tables=estimate_sts query=estimate_sts:'"where id>10000"' estimate=statistics
expdp liu/liu directory=dump tables=estimate_blk query=estimate_blk:'"where t1>2000"' estimate=statistics
BLOCKS: The estimate is calculated by multiplying the number of database blocks used by the source objects by the appropriate block size.
STATISTICS: The estimate is calculated using statistics for each table. For this method to be as accurate as possible, all tables should have been analyzed recently.
Consider the following when ESTIMATE is used:
* The block method is the least accurate of the two: it simply looks at how many blocks are allocated to the storage segments of the specified tables. If the tables have been analyzed recently, estimate=statistics is the most accurate method, so remember to gather fresh statistics before estimating with estimate=statistics. In addition, ESTIMATE=BLOCKS can be inaccurate when:
a) The table was created with a much bigger initial extent size than was needed for the actual table data.
b) Many rows have been deleted from the table, or only a very small percentage of each block is used.
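Case b) can be sketched with some hypothetical numbers (an assumed 8 KB block size, invented block counts): the block method multiplies allocated blocks by the block size, and deleting rows does not deallocate blocks, so the estimate stays at the segment's high-water mark:

```python
# Hypothetical illustration of why ESTIMATE=BLOCKS over-reports after deletes.
BLOCK_SIZE = 8192  # assumed 8 KB database block size


def blocks_estimate(allocated_blocks: int) -> int:
    """Size estimate in bytes: allocated blocks times block size."""
    return allocated_blocks * BLOCK_SIZE


# The table originally filled 10,000 blocks...
before_delete = blocks_estimate(10_000)
# ...then 90% of the rows were deleted, but the blocks remain allocated,
# so the block-based estimate is unchanged.
after_delete = blocks_estimate(10_000)

print(before_delete, after_delete)  # identical: 81920000 81920000
```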
* When statistics have been set explicitly via the DBMS_STATS package (for example, to force particular optimizer plans) rather than gathered from the actual data, ESTIMATE=STATISTICS may report incorrect results.
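A minimal sketch of the idea (the real computation is internal to Data Pump; approximating the statistics-based estimate as num_rows times avg_row_len is an assumption here): if the row count was set by hand via DBMS_STATS, the estimate inherits that distortion:

```python
# Hypothetical sketch: a size estimate derived from optimizer statistics,
# approximated as num_rows * avg_row_len.


def statistics_estimate(num_rows: int, avg_row_len: int) -> int:
    """Approximate table data size in bytes from stored statistics."""
    return num_rows * avg_row_len


# Statistics gathered from the real data: 100,000 rows of ~120 bytes.
real = statistics_estimate(100_000, 120)

# Row count set manually (e.g. with DBMS_STATS.SET_TABLE_STATS) to
# influence the optimizer; the size estimate is now off by 10x.
forced = statistics_estimate(1_000_000, 120)

print(real, forced)  # 12000000 120000000
```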
* A table-mode export also exports metadata: the table definition, any grants, indexes, and other dependent objects. Run the same export with CONTENT=DATA_ONLY or CONTENT=METADATA_ONLY to see how much of the dump file is metadata. In other words, the estimate only reflects the size of the table data, not the dump file size, so unless the export explicitly sets CONTENT=DATA_ONLY, the dump file size will not match the estimate.
* The estimate uses binary units: 1024 bytes per KB, 1048576 bytes per MB, and 1073741824 bytes per GB. These look close to the decimal 1000, 1000000, and 1000000000, but as the dump file grows the difference adds up, so always convert the actual dump file size with the same binary factors when comparing it to the estimate.
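A quick bit of arithmetic (file size invented for illustration) shows how the binary/decimal gap grows with size:

```python
# Binary vs decimal units: the discrepancy grows with file size.
KB, MB, GB = 1024, 1024**2, 1024**3

size_bytes = 50 * GB                     # a 50 GB dump file in binary units
decimal_gb = size_bytes / 1_000_000_000  # the same file in decimal "GB"

# The same file appears almost 4 GB larger when measured in decimal units.
print(round(decimal_gb, 2))  # 53.69
```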
* The master table is not part of the estimated table data being exported. However, it is part of the dump file.
* If a table contains LOBs, the dump file size may vary considerably, because ESTIMATE does not take LOB size into consideration. See the simple LOB test below for complete details.
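As a hypothetical illustration (the segment sizes are invented), the estimate covers only the table segment, so a table whose data lives mostly in a separate LOB segment can be under-estimated badly:

```python
# Hypothetical numbers: out-of-line LOB data lives in its own segment,
# which the block-based estimate ignores.
BLOCK_SIZE = 8192

table_segment_blocks = 1_000        # scalar columns plus LOB locators
lob_segment_bytes = 500 * 1024**2   # 500 MB of LOB data, not counted

estimate = table_segment_blocks * BLOCK_SIZE  # what ESTIMATE would see
actual_data = estimate + lob_segment_bytes    # what actually gets dumped

print(estimate, actual_data)  # 8192000 532480000
```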
Testing confirms that with estimate=blocks or estimate=statistics the estimated size does not match the actual dump file size: the dump file also contains metadata such as indexes, so the estimate cannot be taken as a full measure of the dump file size.