A quick script provided an answer. For each test, I copied a 100K file 100 times and averaged the results (times are in seconds).
Avg. time to make copy between buckets: 0.10705331
Avg. time to make copy within bucket: 0.10522299
A second test produced similar results (very slightly slower in both cases).
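(The script below assumes the ~100K source object already exists in the first bucket. A minimal, hypothetical setup sketch using the same aws-sdk v1 gem and the same bucket and key names might look like this; it wasn't part of the timed runs.)

require 'aws-sdk'

s3 = AWS::S3.new
# upload roughly 100K of filler data to serve as the source object (hypothetical setup step)
s3.buckets['dfsbucket1'].objects['191111308/state_file'].write('x' * 100_000)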
And here's the Ruby script I threw together. It uses the aws-sdk gem.
require 'aws-sdk'

# get buckets
s3 = AWS::S3.new
bucket1 = s3.buckets['dfsbucket1']
bucket2 = s3.buckets['dfsbucket2']

# get an object from bucket 1
random_file = bucket1.objects['191111308/state_file']

# copy the object to the second bucket 100 times
copies = 100
start = Time.now
(1..copies).each do |i|
  random_file.copy_to("test_file#{i}", :bucket => bucket2)
end
puts "Avg. time to make copy between buckets: #{(Time.now - start) / copies}"

# copy the object within the same bucket 100 times
start = Time.now
(1..copies).each { |i| random_file.copy_to("test_file#{i}") }
puts "Avg. time to make copy within bucket: #{(Time.now - start) / copies}"