2014-01-16 · 41 views · score 8

How to track the progress of an asynchronous file upload to Azure Storage

Is there a way to track the progress of a file upload to an Azure storage container? I'm writing a console application in C# that uploads data to Azure. My code currently looks like this:

using System; 
using System.Collections.Generic; 
using System.Linq; 
using System.Text; 
using Microsoft.WindowsAzure.Storage; 
using Microsoft.WindowsAzure.Storage.Auth; 
using Microsoft.WindowsAzure.Storage.Blob; 
using System.Configuration; 
using System.IO; 
using System.Threading; 

namespace AdoAzure 
{ 
    class Program 
    { 
        static void Main(string[] args)
        {
            CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
                ConfigurationManager.ConnectionStrings["StorageConnectionString"].ConnectionString);
            CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
            CloudBlobContainer container = blobClient.GetContainerReference("adokontajnerneki");
            container.CreateIfNotExists();
            CloudBlockBlob myBlob = container.GetBlockBlobReference("racuni.adt");
            CancellationToken ca = new CancellationToken();
            var ado = myBlob.UploadFromFileAsync(@"c:\bo\racuni.adt", FileMode.Open, ca);
            Console.WriteLine(ado.Status); // does not help much
            ado.ContinueWith(t =>
            {
                Console.WriteLine("It is over"); // this works OK
            });
            Console.WriteLine(ado.Status); // does not help much
            Console.WriteLine("theEnd");
            Console.ReadKey();
        }
    } 
} 

This piece of code works fine, but I would like to have some kind of progress bar so the user can see that the task is actually running. Is there anything built into the WindowsAzure.Storage.Blob namespace that I can use out of the box?

Answers

13 votes

I don't think this is possible, because uploading a file is a single task: even though internally the file is split into chunks and those chunks are uploaded separately, the code essentially waits for the whole task to finish.

One possibility is to split the file into chunks manually and upload those chunks asynchronously using the PutBlockAsync method. Once all the chunks have been uploaded, you can call the PutBlockListAsync method to commit the blob. Please see the code below, which implements this:

using Microsoft.WindowsAzure.Storage; 
using Microsoft.WindowsAzure.Storage.Auth; 
using Microsoft.WindowsAzure.Storage.Blob; 
using System; 
using System.Collections.Generic; 
using System.IO; 
using System.Linq; 
using System.Text; 
using System.Threading; 
using System.Threading.Tasks; 

namespace ConsoleApplication1 
{ 
    class Program 
    { 
        static CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentials("accountname", "accountkey"), true);

        static void Main(string[] args)
        {
            CloudBlobClient myBlobClient = storageAccount.CreateCloudBlobClient();
            myBlobClient.SingleBlobUploadThresholdInBytes = 1024 * 1024;
            CloudBlobContainer container = myBlobClient.GetContainerReference("adokontajnerneki");
            //container.CreateIfNotExists();
            CloudBlockBlob myBlob = container.GetBlockBlobReference("cfx.zip");
            var blockSize = 256 * 1024;
            myBlob.StreamWriteSizeInBytes = blockSize;
            var fileName = @"D:\cfx.zip";
            long bytesToUpload = (new FileInfo(fileName)).Length;
            long fileSize = bytesToUpload;

            if (bytesToUpload < blockSize)
            {
                // Small file: a single upload call is enough.
                CancellationToken ca = new CancellationToken();
                var ado = myBlob.UploadFromFileAsync(fileName, FileMode.Open, ca);
                Console.WriteLine(ado.Status);
                ado.ContinueWith(t =>
                {
                    Console.WriteLine("Status = " + t.Status);
                    Console.WriteLine("It is over");
                });
            }
            else
            {
                // Large file: upload block by block so progress can be reported.
                List<string> blockIds = new List<string>();
                int index = 1;
                long startPosition = 0;
                long bytesUploaded = 0;
                do
                {
                    var bytesToRead = Math.Min(blockSize, bytesToUpload);
                    var blobContents = new byte[bytesToRead];
                    using (FileStream fs = new FileStream(fileName, FileMode.Open))
                    {
                        fs.Position = startPosition;
                        fs.Read(blobContents, 0, (int)bytesToRead);
                    }
                    ManualResetEvent mre = new ManualResetEvent(false);
                    var blockId = Convert.ToBase64String(Encoding.UTF8.GetBytes(index.ToString("d6")));
                    Console.WriteLine("Now uploading block # " + index.ToString("d6"));
                    blockIds.Add(blockId);
                    var ado = myBlob.PutBlockAsync(blockId, new MemoryStream(blobContents), null);
                    ado.ContinueWith(t =>
                    {
                        bytesUploaded += bytesToRead;
                        bytesToUpload -= bytesToRead;
                        startPosition += bytesToRead;
                        index++;
                        double percentComplete = (double)bytesUploaded / (double)fileSize;
                        Console.WriteLine("Percent complete = " + percentComplete.ToString("P"));
                        mre.Set();
                    });
                    mre.WaitOne(); // wait for this block to finish before starting the next
                }
                while (bytesToUpload > 0);
                Console.WriteLine("Now committing block list");
                var pbl = myBlob.PutBlockListAsync(blockIds);
                pbl.ContinueWith(t =>
                {
                    Console.WriteLine("Blob uploaded completely.");
                });
            }
            Console.ReadKey();
        }
    } 
} 
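One detail in the code above worth calling out: block IDs must be valid Base64 strings, and within a blob they should all decode to the same length, which is why the index is zero-padded with `ToString("d6")` before encoding. A minimal sketch of that ID scheme (pure .NET, no Azure calls):

```csharp
using System;
using System.Text;

class BlockIdDemo
{
    // Builds a Base64 block ID from a zero-padded index, as in the
    // answer's code. Padding keeps every ID the same decoded length.
    public static string ToBlockId(int index) =>
        Convert.ToBase64String(Encoding.UTF8.GetBytes(index.ToString("d6")));

    static void Main()
    {
        Console.WriteLine(ToBlockId(1));  // MDAwMDAx
        Console.WriteLine(ToBlockId(42)); // MDAwMDQy
    }
}
```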

+100 Thank you so much, this is elegant and clear. – adopilot


Wizardry! Thank you very much. After weeks of research this helped me solve the last piece of my problem. – spoof3r


Just a note here: you shouldn't open a file stream multiple times inside the loop; the file stream should be opened once, outside the loop. – caesay
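To illustrate caesay's point, here is a small sketch (plain file I/O, no Azure calls) that opens the FileStream once and reads sequential chunks from it; the PutBlockAsync call is only indicated in a comment:

```csharp
using System;
using System.IO;

class ChunkReader
{
    static void Main()
    {
        var path = Path.GetTempFileName();
        File.WriteAllBytes(path, new byte[700_000]); // sample payload
        const int blockSize = 256 * 1024;

        // Open the file once, outside the loop, and read it chunk by chunk.
        using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read))
        {
            var buffer = new byte[blockSize];
            int read, index = 0;
            while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
            {
                index++;
                Console.WriteLine($"Block {index}: {read} bytes");
                // Here you would copy buffer[0..read] into a MemoryStream
                // and pass it to PutBlockAsync.
            }
        }
        File.Delete(path);
    }
}
```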

4 votes

Gaurav's solution works well and is very similar to http://blogs.msdn.com/b/kwill/archive/2011/05/30/asynchronous-parallel-block-blob-transfers-with-progress-change-notification.aspx. The challenge with this code is that you are doing a lot of complex work with very little error handling. I'm not saying there is anything wrong with Gaurav's code - it looks solid - but especially with network-related communication code there are a lot of variables and a lot of issues to account for.

For that reason, I updated my original blog post to use the upload code from the storage client library (on the assumption that code from the Azure Storage team is more robust than anything I could write) and to track progress with a ProgressStream class. You can see the updated code at http://blogs.msdn.com/b/kwill/archive/2013/03/06/asynchronous-parallel-block-blob-transfers-with-progress-change-notification-2-0.aspx.
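The ProgressStream idea can be sketched as a thin Stream wrapper that invokes a callback as bytes pass through it. This is a simplified stand-in assuming the general shape of such a class, not the code from the linked post:

```csharp
using System;
using System.IO;

// Minimal progress-reporting stream: forwards everything to an inner
// stream and reports the cumulative byte count through a callback.
class ProgressStream : Stream
{
    private readonly Stream inner;
    private readonly Action<long> onProgress;
    private long total;

    public ProgressStream(Stream inner, Action<long> onProgress)
    {
        this.inner = inner;
        this.onProgress = onProgress;
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        int n = inner.Read(buffer, offset, count);
        total += n;
        onProgress(total);
        return n;
    }

    public override void Write(byte[] buffer, int offset, int count)
    {
        inner.Write(buffer, offset, count);
        total += count;
        onProgress(total);
    }

    public override bool CanRead => inner.CanRead;
    public override bool CanSeek => inner.CanSeek;
    public override bool CanWrite => inner.CanWrite;
    public override long Length => inner.Length;
    public override long Position { get => inner.Position; set => inner.Position = value; }
    public override void Flush() => inner.Flush();
    public override long Seek(long offset, SeekOrigin origin) => inner.Seek(offset, origin);
    public override void SetLength(long value) => inner.SetLength(value);
}

class Demo
{
    static void Main()
    {
        using (var ms = new MemoryStream())
        {
            var ps = new ProgressStream(ms, n => Console.WriteLine($"{n} bytes written"));
            ps.Write(new byte[100], 0, 100); // prints "100 bytes written"
            ps.Write(new byte[200], 0, 200); // prints "300 bytes written"
        }
    }
}
```

The wrapper sits between the file and the upload call, so the callback fires as the storage client consumes or fills the stream.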


+1 @kwill. I agree that we have to handle failures during the upload process, which the code above doesn't account for. The code also uploads one block at a time; in production code you may want to upload multiple blocks in parallel. –
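The parallel-upload suggestion can be sketched with a SemaphoreSlim that caps how many block uploads are in flight at once. `uploadBlock` below is a hypothetical placeholder for a real PutBlockAsync call, so the pattern can be shown without a storage account:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class ParallelBlocks
{
    // Runs uploadBlock for every block index with at most maxParallel
    // uploads in flight at once. uploadBlock stands in for PutBlockAsync.
    public static async Task UploadAllAsync(IEnumerable<int> blockIndexes,
                                            Func<int, Task> uploadBlock,
                                            int maxParallel)
    {
        using (var gate = new SemaphoreSlim(maxParallel))
        {
            var tasks = blockIndexes.Select(async i =>
            {
                await gate.WaitAsync();
                try { await uploadBlock(i); }
                finally { gate.Release(); }
            }).ToList();
            await Task.WhenAll(tasks);
        }
    }

    static async Task Main()
    {
        int done = 0;
        await UploadAllAsync(Enumerable.Range(1, 8),
            async i => { await Task.Delay(10); Interlocked.Increment(ref done); },
            maxParallel: 3);
        Console.WriteLine($"Uploaded {done} blocks"); // prints "Uploaded 8 blocks"
    }
}
```

A real version would also need per-block retry and failure handling before committing the block list, as the comment above points out.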


You should put the example from that page here, not just a link. @GauravMantri's answer is short and good. That link is awful. –
