SpringBoot + MinIO + Alibaba Cloud OSS: A Full-Pipeline Solution for File Upload/Download, Chunked Upload, and Resumable Transfer

File storage in system architecture has evolved from stuffing small files directly into the database, to distributed file systems, to today's cloud-native object storage; each step has been driven by growing business scale and new technical requirements. In this post I share a file-storage solution that has been validated in production: SpringBoot + MinIO + Alibaba Cloud OSS, covering file upload/download, chunked upload, and resumable transfer, so your system can handle large files with ease.

1. The Evolution of File Storage

Before diving into the implementation, let's briefly review how file storage has evolved.

1.1 Pain Points of Traditional File Storage

// Traditional approach - store the file directly in the database
@Entity
public class FileEntity {
    @Id
    private Long id;
    
    // The file content is stored directly as a BLOB
    @Lob
    private byte[] fileContent;
    
    private String fileName;
    private String contentType;
    private Long fileSize;
}

The problems with this approach are obvious:

  • Heavy database load: storing large files in the database drags down overall performance
  • Poor scalability: database capacity is limited and hard to scale horizontally
  • Slow access: every read goes through the database, so a CDN cannot be leveraged

1.2 Advantages of Object Storage

As the business grew, we gradually moved to object storage:

  • High availability: multi-replica storage keeps data safe
  • High scalability: supports massive amounts of data
  • Low cost: pay as you go, no storage servers to maintain
  • Fast access: combined with a CDN for nearby access

2. Technology Selection and Architecture Design

2.1 Why MinIO + Alibaba Cloud OSS?

In real projects we adopted a hybrid storage strategy:

┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│  Application    │    │     MinIO       │    │ Alibaba Cloud   │
│  (SpringBoot)   │───▶│ (private cloud) │───▶│ OSS (public)    │
└─────────────────┘    └─────────────────┘    └─────────────────┘
                             │                       │
                             └───────────────────────┘
                                     data sync

MinIO: deployed in the private cloud, providing fast, low-cost object storage. Alibaba Cloud OSS: serves as the backup tier and CDN origin, ensuring data safety and access performance.
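
To make the hybrid strategy concrete, here is a minimal sketch of a storage service that writes to MinIO first and then copies the object to OSS as a backup. The class and bucket names (HybridStorageService, file-upload-bucket, oss-backup-bucket) are illustrative assumptions, not part of the original design.

@Service
public class HybridStorageService {
    
    @Autowired
    private MinioClient minioClient;
    
    @Autowired
    private OSS ossClient;
    
    /**
     * Write the object to MinIO (primary), then copy it to OSS (backup).
     */
    public void store(String objectName, InputStream data, long size, String contentType) throws Exception {
        // 1. Primary write: MinIO in the private cloud
        minioClient.putObject(
            PutObjectArgs.builder()
                .bucket("file-upload-bucket")
                .object(objectName)
                .stream(data, size, -1)
                .contentType(contentType)
                .build());
        
        // 2. Backup copy to Alibaba Cloud OSS.
        // In production this step would usually run asynchronously (message queue,
        // scheduled sync job, or bucket event notifications) rather than inline.
        try (InputStream in = minioClient.getObject(
                GetObjectArgs.builder()
                    .bucket("file-upload-bucket")
                    .object(objectName)
                    .build())) {
            ossClient.putObject("oss-backup-bucket", objectName, in);
        }
    }
}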

2.2 Core Technology Stack

  • SpringBoot 2.7+: exposes the RESTful API
  • MinIO Java SDK: talks to MinIO
  • Alibaba Cloud OSS SDK: talks to OSS
  • Redis: stores upload progress and chunk information
  • MySQL: stores file metadata (a minimal configuration sketch follows)
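
The stack maps to a handful of configuration properties. A minimal application.yml sketch (endpoints, keys, and connection details are placeholders to be replaced with your own):

# MinIO connection (matches the minio.* properties read by StorageConfig below)
minio:
  endpoint: http://127.0.0.1:9000
  access-key: your-minio-access-key
  secret-key: your-minio-secret-key

# Alibaba Cloud OSS connection (matches the oss.* properties)
oss:
  endpoint: https://oss-cn-hangzhou.aliyuncs.com
  access-key: your-oss-access-key
  secret-key: your-oss-secret-key

spring:
  redis:
    host: 127.0.0.1
    port: 6379
  datasource:
    url: jdbc:mysql://127.0.0.1:3306/file_storage
    username: root
    password: root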

3. Chunked Upload and Resumable Transfer

3.1 Core Idea of Chunked Upload

The basic idea of chunked upload is to split a large file into many small chunks, upload them independently, and merge them on the server side.

@Service
public class ChunkUploadService {
    
    @Autowired
    private MinioClient minioClient;
    
    @Autowired
    private RedisTemplate<String, Object> redisTemplate;
    
    /**
     * Upload a single chunk
     */
    public UploadResult uploadChunk(ChunkUploadRequest request) {
        String uploadId = request.getUploadId();
        int chunkIndex = request.getChunkIndex();
        
        // 1. Save the chunk to a temporary location in MinIO
        String chunkObjectName = String.format("temp/%s/chunk_%d", uploadId, chunkIndex);
        try {
            minioClient.putObject(
                PutObjectArgs.builder()
                    .bucket("file-upload-bucket")
                    .object(chunkObjectName)
                    .stream(request.getChunkData(), request.getChunkSize(), -1)
                    .build()
            );
        } catch (Exception e) {
            throw new RuntimeException("Failed to upload chunk " + chunkIndex, e);
        }
        
        // 2. Record that this chunk has been uploaded.
        // Adding to the Redis set first keeps the count correct even when chunks arrive concurrently.
        String progressKey = "upload_progress:" + uploadId;
        redisTemplate.opsForSet().add(progressKey, chunkIndex);
        Long uploadedCount = redisTemplate.opsForSet().size(progressKey);
        
        // 3. Check whether all chunks have arrived
        if (uploadedCount != null && uploadedCount == request.getTotalChunks()) {
            // Merge the chunks into the final object
            String finalObjectName = mergeChunks(uploadId, request.getTotalChunks());
            return UploadResult.success(finalObjectName);
        }
        
        return UploadResult.continueUpload();
    }
    
    /**
     * Merge all chunks into the final object
     */
    private String mergeChunks(String uploadId, int totalChunks) {
        try {
            // Collect all chunk objects as compose sources
            // (note: MinIO requires every compose source except the last to be at least 5 MiB)
            List<ComposeSource> sourceList = new ArrayList<>();
            for (int i = 0; i < totalChunks; i++) {
                String chunkObjectName = String.format("temp/%s/chunk_%d", uploadId, i);
                sourceList.add(ComposeSource.builder()
                    .bucket("file-upload-bucket")
                    .object(chunkObjectName)
                    .build());
            }
            
            // Generate the final object name
            String finalObjectName = generateFinalObjectName(uploadId);
            
            // Compose the chunks into a single object
            minioClient.composeObject(
                ComposeObjectArgs.builder()
                    .bucket("file-upload-bucket")
                    .object(finalObjectName)
                    .sources(sourceList)
                    .build()
            );
            
            // Clean up the temporary chunk objects
            cleanupTempChunks(uploadId, totalChunks);
            
            return finalObjectName;
        } catch (Exception e) {
            throw new RuntimeException("合并分片失败", e);
        }
    }
}
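
The service above calls two helpers that are not shown. A minimal sketch of how they might look inside ChunkUploadService (the object-naming scheme is an assumption):

    /**
     * Generate the final object name. Here it is simply derived from the upload id;
     * real projects often use the file MD5 or a date-based path instead.
     */
    private String generateFinalObjectName(String uploadId) {
        return "files/" + uploadId;
    }
    
    /**
     * Remove the temporary chunk objects after a successful merge.
     */
    private void cleanupTempChunks(String uploadId, int totalChunks) throws Exception {
        for (int i = 0; i < totalChunks; i++) {
            minioClient.removeObject(
                RemoveObjectArgs.builder()
                    .bucket("file-upload-bucket")
                    .object(String.format("temp/%s/chunk_%d", uploadId, i))
                    .build());
        }
    }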

3.2 Resumable Upload

The key to resumable upload is recording the upload progress, so that after an interruption the transfer can continue from where it stopped:

@Service
public class ResumeUploadService {
    
    @Autowired
    private RedisTemplate<String, Object> redisTemplate;
    
    @Autowired
    private FileMetadataRepository fileMetadataRepository;
    
    /**
     * Check the upload progress
     */
    public UploadProgress checkProgress(String uploadId, String fileMd5) {
        // 1. Check whether the complete file already exists (instant upload by MD5)
        if (fileExists(fileMd5)) {
            return UploadProgress.completed();
        }
        
        // 2. Fetch the already-uploaded chunk indexes from Redis
        String progressKey = "upload_progress:" + uploadId;
        Set<Object> uploadedChunks = redisTemplate.opsForSet().members(progressKey);
        
        if (uploadedChunks == null || uploadedChunks.isEmpty()) {
            // No upload record - start from the beginning
            return UploadProgress.startFrom(0);
        }
        
        // 3. Return the chunk indexes that still need to be uploaded
        int totalChunks = getTotalChunks(uploadId);
        List<Integer> missingChunks = new ArrayList<>();
        for (int i = 0; i < totalChunks; i++) {
            if (!uploadedChunks.contains(i)) {
                missingChunks.add(i);
            }
        }
        
        return UploadProgress.resumeFrom(missingChunks);
    }
    
    /**
     * Check whether a file with the same MD5 already exists (dedup / instant upload)
     */
    public boolean validateFileMd5(String fileMd5) {
        // Query the database (or cache) for a file with the same MD5
        return fileMetadataRepository.existsByMd5(fileMd5);
    }
}
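
The MD5-based deduplication assumes someone computes the file digest, typically the client before initiating the upload, or the server when verifying the merged file. A simple server-side sketch (the Md5Utils class is illustrative and not part of the original code):

public final class Md5Utils {
    
    /**
     * Compute the hex-encoded MD5 of a stream, reading it in 8 KB buffers
     * so large files never have to be loaded into memory at once.
     */
    public static String computeMd5(InputStream in) throws Exception {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        byte[] buffer = new byte[8192];
        int read;
        while ((read = in.read(buffer)) != -1) {
            md5.update(buffer, 0, read);
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : md5.digest()) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }
}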

3.3 File Metadata Management

@Entity
@Table(name = "file_metadata")
@Data
public class FileMetadata {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    
    private String fileMd5;        // File MD5, used for deduplication
    private String fileName;       // Original file name
    private String objectName;     // Object name in storage
    private String bucketName;     // Bucket name
    private Long fileSize;         // File size in bytes
    private String contentType;    // MIME type
    private String storageType;    // Storage type (MINIO/OSS)
    private String url;            // Access URL
    private LocalDateTime uploadTime;
    private String uploader;       // Uploader
    private String status;         // Status (UPLOADING/COMPLETED/FAILED)
    
    @CreationTimestamp
    private LocalDateTime createdAt;
    
    @UpdateTimestamp
    private LocalDateTime updatedAt;
}
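
Several services above use a fileMetadataRepository with existsByMd5 / findByMd5 lookups. A minimal Spring Data JPA sketch (the explicit JPQL keeps those method names even though the entity property is called fileMd5):

public interface FileMetadataRepository extends JpaRepository<FileMetadata, Long> {
    
    @Query("select case when count(f) > 0 then true else false end "
         + "from FileMetadata f where f.fileMd5 = :md5")
    boolean existsByMd5(@Param("md5") String md5);
    
    @Query("select f from FileMetadata f where f.fileMd5 = :md5")
    FileMetadata findByMd5(@Param("md5") String md5);
}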

4. SpringBoot Integration

4.1 Configuration Class

@Configuration
public class StorageConfig {
    
    @Value("${minio.endpoint}")
    private String minioEndpoint;
    
    @Value("${minio.access-key}")
    private String minioAccessKey;
    
    @Value("${minio.secret-key}")
    private String minioSecretKey;
    
    @Value("${oss.endpoint}")
    private String ossEndpoint;
    
    @Value("${oss.access-key}")
    private String ossAccessKey;
    
    @Value("${oss.secret-key}")
    private String ossSecretKey;
    
    @Bean
    public MinioClient minioClient() {
        return MinioClient.builder()
            .endpoint(minioEndpoint)
            .credentials(minioAccessKey, minioSecretKey)
            .build();
    }
    
    @Bean
    public OSS ossClient() {
        return new OSSClientBuilder().build(ossEndpoint, ossAccessKey, ossSecretKey);
    }
}

4.2 File Upload Controller

@RestController
@RequestMapping("/api/files")
public class FileUploadController {
    
    @Autowired
    private ChunkUploadService chunkUploadService;
    
    @Autowired
    private ResumeUploadService resumeUploadService;
    
    @Autowired
    private FileDownloadService fileDownloadService;
    
    /**
     * Initialize a chunked upload
     */
    @PostMapping("/init-chunk-upload")
    public ResponseEntity<InitUploadResponse> initChunkUpload(@RequestBody InitUploadRequest request) {
        String uploadId = UUID.randomUUID().toString();
        String fileMd5 = request.getFileMd5();
        
        // Check whether an identical file (same MD5) already exists
        if (resumeUploadService.validateFileMd5(fileMd5)) {
            // The file already exists - return immediately (instant upload)
            return ResponseEntity.ok(InitUploadResponse.existingFile());
        }
        
        // Check the current upload progress
        UploadProgress progress = resumeUploadService.checkProgress(uploadId, fileMd5);
        
        return ResponseEntity.ok(InitUploadResponse.builder()
            .uploadId(uploadId)
            .progress(progress)
            .build());
    }
    
    /**
     * Upload a single chunk
     */
    @PostMapping("/upload-chunk")
    public ResponseEntity<UploadResult> uploadChunk(
            @RequestParam("uploadId") String uploadId,
            @RequestParam("chunkIndex") int chunkIndex,
            @RequestParam("chunk") MultipartFile chunk) {
        
        ChunkUploadRequest request = ChunkUploadRequest.builder()
            .uploadId(uploadId)
            .chunkIndex(chunkIndex)
            .chunkData(chunk.getInputStream())
            .chunkSize(chunk.getSize())
            .build();
            
        UploadResult result = chunkUploadService.uploadChunk(request);
        return ResponseEntity.ok(result);
    }
    
    /**
     * Download a file
     */
    @GetMapping("/download/{fileId}")
    public void downloadFile(@PathVariable Long fileId, HttpServletResponse response) {
        fileDownloadService.downloadFile(fileId, response);
    }
}

4.3 File Download Service

@Service
public class FileDownloadService {
    
    @Autowired
    private FileMetadataRepository fileMetadataRepository;
    
    @Autowired
    private MinioClient minioClient;
    
    @Autowired
    private OSS ossClient;
    
    public void downloadFile(Long fileId, HttpServletResponse response) {
        FileMetadata metadata = fileMetadataRepository.findById(fileId)
            .orElseThrow(() -> new RuntimeException("File not found"));
        
        try {
            // Choose the download path based on where the file is stored
            if ("MINIO".equals(metadata.getStorageType())) {
                downloadFromMinio(metadata, response);
            } else if ("OSS".equals(metadata.getStorageType())) {
                downloadFromOss(metadata, response);
            }
        } catch (Exception e) {
            throw new RuntimeException("下载失败", e);
        }
    }
    
    private void downloadFromMinio(FileMetadata metadata, HttpServletResponse response) throws Exception {
        GetObjectResponse object = minioClient.getObject(
            GetObjectArgs.builder()
                .bucket(metadata.getBucketName())
                .object(metadata.getObjectName())
                .build()
        );
        
        // Set the response headers
        response.setContentType(metadata.getContentType());
        response.setHeader("Content-Disposition", 
            "attachment; filename=\"" + metadata.getFileName() + "\"");
        response.setContentLengthLong(metadata.getFileSize());
        
        // Stream the object into the response
        StreamUtils.copy(object, response.getOutputStream());
    }
    
    private void downloadFromOss(FileMetadata metadata, HttpServletResponse response) throws Exception {
        OSSObject ossObject = ossClient.getObject(metadata.getBucketName(), metadata.getObjectName());
        
        // Set the response headers
        response.setContentType(metadata.getContentType());
        response.setHeader("Content-Disposition", 
            "attachment; filename=\"" + metadata.getFileName() + "\"");
        response.setContentLengthLong(metadata.getFileSize());
        
        // Stream the object into the response
        StreamUtils.copy(ossObject.getObjectContent(), response.getOutputStream());
    }
}

5. Performance Optimization and Best Practices

5.1 Concurrent Upload Optimization

@Service
public class ConcurrentUploadService {
    
    @Autowired
    private ChunkUploadService chunkUploadService;
    
    private final ExecutorService executorService = 
        Executors.newFixedThreadPool(10);
    
    /**
     * Upload multiple chunks concurrently
     */
    public CompletableFuture<UploadResult> uploadChunksConcurrent(
            List<ChunkUploadRequest> chunks) {
        
        List<CompletableFuture<UploadResult>> futures = chunks.stream()
            .map(chunk -> CompletableFuture.supplyAsync(() -> {
                return chunkUploadService.uploadChunk(chunk);
            }, executorService))
            .collect(Collectors.toList());
        
        return CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]))
            .thenApply(v -> UploadResult.success("并发上传完成"));
    }
}

5.2 Memory Optimization

For large files, memory optimization is critical:

# Multipart upload limits (application.yml)
spring:
  servlet:
    multipart:
      max-file-size: 100MB
      max-request-size: 500MB
      file-size-threshold: 1MB  # files larger than this are written to disk instead of kept in memory
// The same limits expressed as a Java configuration
@Configuration
public class FileUploadConfig {
    
    @Bean
    public MultipartConfigElement multipartConfigElement() {
        MultipartConfigFactory factory = new MultipartConfigFactory();
        factory.setMaxFileSize(DataSize.ofMegabytes(100));
        factory.setMaxRequestSize(DataSize.ofMegabytes(500));
        return factory.createMultipartConfig();
    }
}

5.3 Caching Strategy

@Service
public class FileCacheService {
    
    @Autowired
    private FileMetadataRepository fileMetadataRepository;
    
    @Cacheable(value = "file-metadata", key = "#fileMd5")
    public FileMetadata getFileByMd5(String fileMd5) {
        return fileMetadataRepository.findByMd5(fileMd5);
    }
    
    @CacheEvict(value = "file-metadata", key = "#fileMd5")
    public void removeFileCache(String fileMd5) {
        // The @CacheEvict annotation removes the entry; nothing else to do here
    }
}
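
For @Cacheable / @CacheEvict to take effect, caching has to be enabled and backed by a cache manager. A minimal Redis-backed sketch (the 30-minute TTL is an assumption):

@Configuration
@EnableCaching
public class CacheConfig {
    
    @Bean
    public CacheManager cacheManager(RedisConnectionFactory connectionFactory) {
        // Keep cached file metadata in Redis for 30 minutes
        RedisCacheConfiguration config = RedisCacheConfiguration.defaultCacheConfig()
            .entryTtl(Duration.ofMinutes(30));
        return RedisCacheManager.builder(connectionFactory)
            .cacheDefaults(config)
            .build();
    }
}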

6. Security and Access Control

6.1 Upload Permission Checks

@RestController
public class SecureFileUploadController {
    
    @PostMapping("/secure-upload")
    public ResponseEntity<?> secureUpload(
            @RequestParam("file") MultipartFile file,
            Authentication authentication) {
        
        // Check the user's upload permission
        if (!hasUploadPermission(authentication)) {
            return ResponseEntity.status(HttpStatus.FORBIDDEN).build();
        }
        
        // Validate the file type
        if (!isValidFileType(file.getContentType())) {
            return ResponseEntity.badRequest()
                .body("不支持的文件类型");
        }
        
        // Validate the file size
        if (file.getSize() > MAX_FILE_SIZE) {
            return ResponseEntity.badRequest()
                .body("文件大小超过限制");
        }
        
        // Scan the file content (anti-virus check)
        if (!isFileSafe(file)) {
            return ResponseEntity.badRequest()
                .body("文件包含恶意内容");
        }
        
        // Perform the upload
        return ResponseEntity.ok().build();
    }
}
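
The helper methods above (hasUploadPermission, isValidFileType, isFileSafe) are left to the reader in the original. A minimal whitelist-based sketch of the type and size checks (the allowed MIME types and the 100 MB limit are assumptions):

    // Illustrative whitelist - adjust to the business requirements
    private static final Set<String> ALLOWED_TYPES = Set.of(
        "image/jpeg", "image/png", "application/pdf", "application/zip");
    
    private static final long MAX_FILE_SIZE = 100L * 1024 * 1024; // 100 MB
    
    private boolean isValidFileType(String contentType) {
        // Reject missing or unlisted MIME types outright
        return contentType != null && ALLOWED_TYPES.contains(contentType);
    }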

6.2 Download Permission Checks

@GetMapping("/download/{fileId}")
public ResponseEntity<Resource> downloadFile(
        @PathVariable Long fileId,
        Authentication authentication) {
    
    FileMetadata metadata = fileMetadataRepository.findById(fileId).orElse(null);
    if (metadata == null) {
        return ResponseEntity.notFound().build();
    }
    
    // Check download permission
    if (!hasDownloadPermission(authentication, metadata)) {
        return ResponseEntity.status(HttpStatus.FORBIDDEN).build();
    }
    
    // Generate a temporary access URL (with an expiry time)
    String tempUrl = generateTempUrl(metadata);
    
    return ResponseEntity.ok()
        .header(HttpHeaders.CONTENT_DISPOSITION, 
            "attachment; filename=\"" + metadata.getFileName() + "\"")
        .body(new UrlResource(tempUrl));
}
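
generateTempUrl is not shown in the article. With MinIO it can be implemented as a presigned GET URL; a minimal sketch (the 10-minute expiry is an assumption):

    private String generateTempUrl(FileMetadata metadata) throws Exception {
        // Presigned URL that grants read access for 10 minutes only
        return minioClient.getPresignedObjectUrl(
            GetPresignedObjectUrlArgs.builder()
                .method(Method.GET)
                .bucket(metadata.getBucketName())
                .object(metadata.getObjectName())
                .expiry(10, TimeUnit.MINUTES)
                .build());
    }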

7. Monitoring and Operations

7.1 Upload Progress Monitoring

@Component
public class UploadMonitor {
    
    private final MeterRegistry meterRegistry;
    
    public UploadMonitor(MeterRegistry meterRegistry) {
        this.meterRegistry = meterRegistry;
    }
    
    public void recordUploadProgress(String uploadId, long bytesUploaded) {
        // Expose the uploaded byte count as a gauge, tagged with the upload id
        // (in production you would register the gauge once and update a shared counter)
        Gauge.builder("file.upload.progress", () -> bytesUploaded)
            .tag("uploadId", uploadId)
            .register(meterRegistry);
    }
    
    public void recordUploadDuration(String uploadId, long millis) {
        // Record how long an upload (or a single chunk) took
        Timer.builder("file.upload.duration")
            .tag("uploadId", uploadId)
            .register(meterRegistry)
            .record(millis, TimeUnit.MILLISECONDS);
    }
}

7.2 Storage Usage Monitoring

@Slf4j
@Component
public class StorageMonitor {
    
    @Autowired
    private MeterRegistry meterRegistry;
    
    @Scheduled(fixedRate = 300000) // runs every 5 minutes
    public void monitorStorageUsage() {
        try {
            // Get MinIO storage usage
            long minioUsage = getMinioStorageUsage();
            
            // Get OSS storage usage
            long ossUsage = getOssStorageUsage();
            
            // Report the metrics
            meterRegistry.gauge("storage.usage.minio", minioUsage);
            meterRegistry.gauge("storage.usage.oss", ossUsage);
            
            // Check whether storage space is running low
            if (minioUsage > STORAGE_WARNING_THRESHOLD) {
                sendAlert("MinIO is running out of storage space");
            }
        } catch (Exception e) {
            log.error("监控存储空间失败", e);
        }
    }
}

8. Summary

With the combination of SpringBoot + MinIO + Alibaba Cloud OSS, we have built a complete and efficient file-storage solution:

  1. High performance: chunked upload and concurrent processing greatly improve upload throughput
  2. High availability: multi-replica storage and hybrid deployment keep data safe
  3. High scalability: supports massive numbers of files and scales out easily
  4. Low cost: pay as you go, with high resource utilization
  5. Easy maintenance: standardized interfaces simplify operations

This solution has been running stably in several production projects and has handled tens of millions of file upload requests. Of course, no technical solution is a silver bullet; it should be adapted and tuned to your specific business scenario.

I hope today's post gives you some ideas and helps you keep moving forward on the technical road!


Follow 「服务端技术精选」 for more hands-on technical articles!


Title: SpringBoot + MinIO + Alibaba Cloud OSS: A Full-Pipeline Solution for File Upload/Download, Chunked Upload, and Resumable Transfer
Author: jiangyi
URL: http://jiangyi.space/articles/2025/12/29/1766998824367.html
