I am trying to back up a DynamoDB table to S3. Since there is no export option for this in the AWS console, and since the table is not large, I am trying to do it with a boto-based script. Here is the main block of my script:

import boto.dynamodb2
from boto.dynamodb2.table import Table

c_ddb2 = boto.dynamodb2.connect_to_region(...)
table = Table("myTable", connection=c_ddb2)

# Scan the table and store each item in S3
scanres = table.scan()
for item in scanres:
    # process and store the item
I get the following exception:
Traceback (most recent call last):
  File "/home/.../ddb2s3.py", line 155, in <module>
    main()
  File "/home/.../ddb2s3.py", line 124, in main
    for item in scanres:
  File "/usr/local/lib/python2.7/dist-packages/boto/dynamodb2/results.py", line 62, in next
    self.fetch_more()
  File "/usr/local/lib/python2.7/dist-packages/boto/dynamodb2/results.py", line 144, in fetch_more
    results = self.the_callable(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/boto/dynamodb2/table.py", line 1213, in _scan
    **kwargs
  File "/usr/local/lib/python2.7/dist-packages/boto/dynamodb2/layer1.py", line 1712, in scan
    body=json.dumps(params))
  File "/usr/local/lib/python2.7/dist-packages/boto/dynamodb2/layer1.py", line 2100, in make_request
    retry_handler=self._retry_handler)
  File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 932, in _mexe
    status = retry_handler(response, i, next_sleep)
  File "/usr/local/lib/python2.7/dist-packages/boto/dynamodb2/layer1.py", line 2134, in _retry_handler
    response.status, response.reason, data)
boto.dynamodb2.exceptions.ProvisionedThroughputExceededException: ProvisionedThroughputExceededException: 400 Bad Request
{u'message': u'The level of configured provisioned throughput for the table was exceeded. Consider increasing your provisioning level with the UpdateTable API', u'__type': u'com.amazonaws.dynamodb.v20120810#ProvisionedThroughputExceededException'}
The read provisioned throughput is set to 1000, which should be plenty. The write provisioned throughput was set to a lower value when I ran the script and got the exception, but I did not want to adjust it because that sometimes interferes with batch writes to the table, and a pure read workload should not need it anyway. Why am I getting this error? The AWS console monitoring for MyTable shows consumed read capacity well below the provisioned 1000. What am I doing wrong?
If your hash key is well distributed, everything works fine. But if your hash key is not well distributed, all or most of your reads can hit a single partition. For example, if you have 10 partitions and 1000 read capacity units provisioned on the table, each partition gets about 100 read capacity units. If all your reads hit one partition, you effectively have 100 read units instead of 1000.
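The arithmetic in the example above can be sketched as follows; the partition count and capacity figures are the hypothetical ones from the answer, not values DynamoDB exposes directly:

```python
# Hypothetical figures from the example: a table with 1000 provisioned
# read capacity units spread across 10 partitions.
table_read_capacity = 1000
num_partitions = 10

# DynamoDB divides provisioned throughput roughly evenly across partitions.
per_partition_capacity = table_read_capacity // num_partitions

# If every read hits one "hot" partition, the effective throughput of the
# whole table collapses to that single partition's share.
effective_capacity_hot_key = per_partition_capacity

print(per_partition_capacity)      # 100
print(effective_capacity_hot_key)  # 100
```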
Unfortunately, the only way to really fix this is to choose a better-distributed hash key and rewrite the table with those keys.
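Short of rekeying the table, a common way to live with occasional throttling during a scan is to retry with exponential backoff. Below is a minimal, generic sketch; the `with_backoff` helper and its parameters are my own invention, not part of boto, and you would supply a predicate that recognises `ProvisionedThroughputExceededException`:

```python
import time

def with_backoff(call, should_retry, max_retries=5, base_delay=0.5,
                 sleep=time.sleep):
    """Run call(), retrying with exponential backoff whenever
    should_retry(exc) classifies the exception as a throttle error.
    Re-raises the exception once retries are exhausted or the error
    is not retryable."""
    for attempt in range(max_retries + 1):
        try:
            return call()
        except Exception as exc:
            if attempt == max_retries or not should_retry(exc):
                raise
            # Back off: base_delay, 2*base_delay, 4*base_delay, ...
            sleep(base_delay * (2 ** attempt))
```

You might then wrap each page fetch of the scan in this helper, e.g. `with_backoff(fetch_next_page, lambda e: "ProvisionedThroughputExceeded" in str(e))`, where `fetch_next_page` is whatever pulls the next batch of items in your script.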