Question:

Merging multiple Azure Cloud Block Blobs in Android

赵同
2023-03-14

I am new to Azure Blob storage. I basically want to upload a video file from my Android app to Azure, but I can't: once the size reaches 32 MB the upload stops and throws an OutOfMemory exception. After researching how to work around this, I came up with the idea of breaking the file into chunks, uploading them as multiple blobs, and finally combining them into a single blob. But I don't know how to do that. I tried using commitBlockList, but I cannot get the Id of each blob.

try {
        // Setup the cloud storage account.
        CloudStorageAccount storageAccount = CloudStorageAccount.parse(storageConnectionString);
        int maxSize = 64 * Constants.MB;

        // Create a blob service client
        CloudBlobClient blobClient = storageAccount.createCloudBlobClient();
        blobClient.getDefaultRequestOptions().setSingleBlobPutThresholdInBytes(maxSize);
        CloudBlobContainer container = blobClient.getContainerReference("testing");
        container.createIfNotExists();
        BlobContainerPermissions containerPermissions = new BlobContainerPermissions();
        containerPermissions.setPublicAccess(BlobContainerPublicAccessType.CONTAINER);
        container.uploadPermissions(containerPermissions);
        CloudBlockBlob finalFile = container.getBlockBlobReference("1.jpg");
        CloudBlob b = container.getBlockBlobReference("temp");
        String Lease = b.getSnapshotID();
        b.uploadFromFile(URL);
        List<BlockEntry> blockEntryIterator = new ArrayList<>();
        blockEntryIterator.add(new BlockEntry(Lease));
        finalFile.commitBlockList(blockEntryIterator);
    } catch (Throwable t) {

    }
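
From what I have read, the general pattern seems to be to stage each chunk with uploadBlock and then commit all of the block IDs at once with commitBlockList. A rough sketch of what I understand so far (the chunk size, the blob name "video.mp4", and localPath are placeholders, not my real code):

// Sketch of the block upload pattern, reusing the blobClient set up above.
// chunkSize, "video.mp4" and localPath are placeholders.
CloudBlobContainer container = blobClient.getContainerReference("testing");
CloudBlockBlob blob = container.getBlockBlobReference("video.mp4");

int chunkSize = 1024 * 1024; // 1 MB per block (placeholder value)
List<BlockEntry> blocks = new ArrayList<>();

FileInputStream in = new FileInputStream(localPath);
try {
    byte[] buffer = new byte[chunkSize];
    int read;
    int index = 0;
    while ((read = in.read(buffer)) > 0) {
        // Every block ID must be Base64 and the same length for this blob.
        String raw = String.format(Locale.US, "block-%06d", index++);
        String blockId = Base64.encodeToString(raw.getBytes(), Base64.NO_WRAP);
        blob.uploadBlock(blockId, new ByteArrayInputStream(buffer, 0, read), read);
        blocks.add(new BlockEntry(blockId));
    }
} finally {
    in.close();
}
// A single commit assembles the staged blocks into one readable blob.
blob.commitBlockList(blocks);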

Update ~~~

I tried splitting the file into multiple parts, but now I get this error: "The specified blob or block content is invalid".

public void splitTest(String URL) throws IOException, URISyntaxException, InvalidKeyException, StorageException {

    new ConversionNotificationSetup().sendNotification(a.getApplicationContext(),"STARTED");
    CloudBlockBlob blob = null;
    List<BlockEntry> blockList = null;
    try{
        // get file reference
        FileInputStream fs = new FileInputStream( URL );
        File sourceFile = new File( URL);

        // set counters
        long fileSize = sourceFile.length();
        int blockSize = 3 * (1024 * 1024); // 3 MB
        int blockCount = (int)((float)fileSize / (float)blockSize) + 1;
        long bytesLeft = fileSize;
        int blockNumber = 0;
        long bytesRead = 0;

        CloudStorageAccount storageAccount = CloudStorageAccount.parse(storageConnectionString);
        CloudBlobClient blobClient = storageAccount.createCloudBlobClient();
        CloudBlobContainer container = blobClient.getContainerReference("testing");
        String title = "Android_" + getFileNameFromUrl(URL);
        // get ref to the blob we are creating while uploading
        blob = container.getBlockBlobReference(title);
        blob.deleteIfExists();

        // list of all block ids we will be uploading - need it for the commit at the end
        blockList = new ArrayList<BlockEntry>();

        // loop through the file and upload chunks of the file to the blob
        while( bytesLeft > 0 ) {

            blockNumber++;
            // how much to read (only last chunk may be smaller)
            int bytesToRead = 0;
            if ( bytesLeft >= (long)blockSize ) {
                bytesToRead = blockSize;
            } else {
                bytesToRead = (int)bytesLeft;
            }

            // trace out progress
            float pctDone = ((float)blockNumber / (float)blockCount) * (float)100;


            // save block id in array (must be base64)
            String x = "";
            if(blockNumber<=9) {
                traceLine( "blockid: 000" + blockNumber + ". " + String.format("%.0f%%",pctDone) + " done.");
                x = "blockid000" + blockNumber;
            }
            else if(blockNumber>=10 && blockNumber<=99){
                traceLine( "blockid: 00" + blockNumber + ". " + String.format("%.0f%%",pctDone) + " done.");
                x = "blockid00" + blockNumber;
            }
            else if(blockNumber>=100 && blockNumber<=999){
                traceLine( "blockid0: " + blockNumber + ". " + String.format("%.0f%%",pctDone) + " done.");
                x = "blockid0" + blockNumber;
            }
            else if(blockNumber>=1000 && blockNumber<=9999){
                traceLine( "blockid: " + blockNumber + ". " + String.format("%.0f%%",pctDone) + " done.");
                x = "blockid" + blockNumber;
            }
            String blockId = Base64.encodeToString(x.getBytes(),Base64.DEFAULT).replace("\n","").toLowerCase();
            traceLine( "Base 64["+x+"] -> " + blockId);
            BlockEntry block = new BlockEntry(blockId);
            blockList.add(block);

            // upload block chunk to Azure Storage
            blob.uploadBlock( blockId, fs, (long)bytesToRead);

            // increment/decrement counters
            bytesRead += bytesToRead;
            bytesLeft -= bytesToRead;

        }
        fs.close();
        traceLine( "CommitBlockList. BytesUploaded: " + bytesRead);
        blob.commitBlockList(blockList);
        new ConversionNotificationSetup().sendNotification(a.getApplicationContext(),"UPLOAD COMPLETE");
        return;
    }
    catch (StorageException storageException) {
        traceLine("StorageException encountered: ");
        traceLine(storageException.getMessage());
        new ConversionNotificationSetup().sendNotification(a.getApplicationContext(),"FAILED");
        assert blockList != null;
        blob.commitBlockList(blockList);
        return;
    } catch( IOException ex ) {
        traceLine( "IOException: " + ex );
        new ConversionNotificationSetup().sendNotification(a.getApplicationContext(),"FAILED");
        assert blockList != null;
        blob.commitBlockList(blockList);
        return;
    } catch (Exception e) {
        traceLine("Exception encountered: ");
        traceLine(e.getMessage());
        new ConversionNotificationSetup().sendNotification(a.getApplicationContext(),"FAILED");
        assert blockList != null;
        blob.commitBlockList(blockList);
        return;
    }
}
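
The Put Block documentation says a block ID must be a valid Base64 string and that, for a given blob, every block ID has to be the same length before encoding. So in the working version below I generate fixed-width, zero-padded IDs and encode them with Base64.NO_WRAP; the long if/else chain above collapses to something like this (a sketch only, the "tempblobid" prefix is arbitrary):

// Sketch: fixed-width block IDs. All IDs for a given blob must be the same
// length before Base64 encoding; NO_WRAP keeps line terminators out of the ID.
private static String makeBlockId(int blockNumber) {
    String raw = String.format(Locale.US, "tempblobid%05d", blockNumber);
    return Base64.encodeToString(raw.getBytes(), Base64.NO_WRAP);
}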

~~~ Working Update ~~~

For anyone who wants to reuse this method for their own files:

public boolean splitTest(String URL) throws IOException, URISyntaxException, InvalidKeyException, StorageException {
    new ConversionNotificationSetup().sendNotification(a.getApplicationContext(),"STARTED");
    CloudBlockBlob blob = null;
    List<BlockEntry> blockList = null;
    try{
        // get file reference
        FileInputStream fs = new FileInputStream(URL);
        File sourceFile = new File(URL);

        // set counters
        long fileSize = sourceFile.length();
        int blockSize = 512 * 1024; // 512 KB
        //int blockSize = 1 * (1024 * 1024); // 1 MB
        int blockCount = (int)((float)fileSize / (float)blockSize) + 1;
        long bytesLeft = fileSize;
        int blockNumber = 0;
        long bytesRead = 0;

        CloudStorageAccount storageAccount = CloudStorageAccount.parse(storageConnectionString);
        CloudBlobClient blobClient = storageAccount.createCloudBlobClient();
        CloudBlobContainer container = blobClient.getContainerReference("testing");
        String title = "Android/Android_" + getFileNameFromUrl(URL).replace("\n","").replace(" ","_").replace("-","").toLowerCase();
        // get ref to the blob we are creating while uploading
        blob = container.getBlockBlobReference(title);
        traceLine("Title of blob -> " + title);

        if(blob.exists())
            blob.deleteIfExists();

        blob.setStreamWriteSizeInBytes(blockSize);
        // list of all block ids we will be uploading - need it for the commit at the end
        blockList = new ArrayList<>();

        // loop through the file and upload chunks of the file to the blob
        while( bytesLeft > 0 ) {
            // how much to read (only last chunk may be smaller)
            int bytesToRead = 0;
            if ( bytesLeft >= (long)blockSize ) {
                bytesToRead = blockSize;
            } else {
                bytesToRead = (int)bytesLeft;
            }

            // trace out progress
            float pctDone = ((float)blockNumber / (float)blockCount) * (float)100;


            // save block id in array (must be base64)
            String x = "";
            if(blockNumber<=9) {
                traceLine( "tempblobid0000" + blockNumber + ". " + String.format("%.0f%%",pctDone) + " done.");
                x = "tempblobid0000" + blockNumber;
            }
            else if(blockNumber>=10 && blockNumber<=99){
                traceLine( "tempblobid000" + blockNumber + ". " + String.format("%.0f%%",pctDone) + " done.");
                x = "tempblobid000" + blockNumber;
            }
            else if(blockNumber>=100 && blockNumber<=999){
                traceLine( "tempblobid00" + blockNumber + ". " + String.format("%.0f%%",pctDone) + " done.");
                x = "tempblobid00" + blockNumber;
            }
            else if(blockNumber>=1000 && blockNumber<=9999){
                traceLine( "tempblobid0" + blockNumber + ". " + String.format("%.0f%%",pctDone) + " done.");
                x = "tempblobid0" + blockNumber;
            }
            else if(blockNumber>=10000 && blockNumber<=99999){
                traceLine( "tempblobid" + blockNumber + ". " + String.format("%.0f%%",pctDone) + " done.");
                x = "tempblobid" + blockNumber;
            }
            String blockId = Base64.encodeToString(x.getBytes(),Base64.NO_WRAP).replace("\n","").toLowerCase();
            traceLine( "Base 64["+ x +"] -> " + blockId);
            BlockEntry block = new BlockEntry(blockId);
            blockList.add(block);
            // upload block chunk to Azure Storage
            blob.uploadBlock( blockId, fs, (long)bytesToRead);
            notification2(a,pctDone);
            //a.update(pctDone);
            // increment/decrement counters
            bytesRead += bytesToRead;
            bytesLeft -= bytesToRead;
            blockNumber++;
        }
        fs.close();
        traceLine( "CommitBlockList. BytesUploaded: " + bytesRead + "\t total bytes -> " + fileSize + "\tBytes Left -> " + bytesLeft);
        blob.commitBlockList(blockList);
        new ConversionNotificationSetup().sendNotification(a.getApplicationContext(),"UPLOAD COMPLETE");
        return true;
    }
    catch (StorageException storageException) {
        traceLine("StorageException encountered: ");
        traceLine(storageException.getMessage());
        traceLine("HTTP Status code -> " + storageException.getHttpStatusCode());
        new ConversionNotificationSetup().sendNotification(a.getApplicationContext(),"FAILED");
        if (blob != null) {
            blob.commitBlockList(blockList);
        }
        return false;
    } catch( IOException ex ) {
        traceLine( "IOException: " + ex );
        new ConversionNotificationSetup().sendNotification(a.getApplicationContext(),"FAILED");
        if (blob != null) {
            blob.commitBlockList(blockList);
        }
        return false;
    } catch (Exception e) {
        traceLine("Exception encountered: ");
        traceLine(e.getMessage());
        new ConversionNotificationSetup().sendNotification(a.getApplicationContext(),"FAILED");
        if (blob != null) {
            blob.commitBlockList(blockList);
        }
        return false;
    }

}
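
One usage note: the upload does network I/O, so splitTest has to be called off the Android main thread. A minimal sketch (localFilePath is a placeholder for the path of the video on the device):

// Sketch: run the upload on a background thread; calling it on the main
// thread would fail with NetworkOnMainThreadException on Android.
ExecutorService uploader = Executors.newSingleThreadExecutor();
uploader.execute(() -> {
    try {
        boolean ok = splitTest(localFilePath); // localFilePath is a placeholder
        Log.d("Upload", "splitTest finished, success=" + ok);
    } catch (Exception e) {
        Log.e("Upload", "splitTest failed", e);
    }
});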

1 Answer

邵城
2023-03-14

That depends on which phone you are using. If you are running out of memory at 32 MB, you must be on a very small phone or have a lot of other processes running. Look at your phone, see how much memory you actually have available, and, as Gaurav mentioned and as my other answer mentioned, lower the threshold to that level. Fooling yourself about that won't help with what you are trying to do.

The two settings you want to look at are singleBlobPutThresholdInBytes and setStreamWriteSizeInBytes on the blob itself. singleBlobPutThresholdInBytes affects when chunking starts, while setStreamWriteSizeInBytes affects the size of the chunks once chunking happens. The put threshold defaults to 64 MB and the write size defaults to 4 MB. Note that if you reduce the write size you may not be able to reach the maximum block blob size, because a block blob is limited to 50,000 blocks. Try reducing singleBlobPutThreshold to the amount of memory you could previously handle — if that is less than 4 MB, you are really in trouble and will also need to reduce the stream write size.
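
As a rough sketch of where those two settings live in the Java SDK (the values below are only illustrative; pick them based on how much memory your device can actually spare):

// Sketch: applying the two settings discussed above (illustrative values;
// the container and blob names are placeholders).
CloudStorageAccount account = CloudStorageAccount.parse(storageConnectionString);
CloudBlobClient client = account.createCloudBlobClient();

// Uploads above this threshold are split into blocks instead of a single put.
client.getDefaultRequestOptions().setSingleBlobPutThresholdInBytes(4 * 1024 * 1024);

CloudBlobContainer container = client.getContainerReference("testing");
CloudBlockBlob blob = container.getBlockBlobReference("video.mp4");

// Size of each block written once the upload is chunked.
blob.setStreamWriteSizeInBytes(512 * 1024);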
