Preface

Simple programmatic video editing mostly comes down to beauty filters, watermarks (image overlays), trimming, concatenation, and audio/video processing. For beauty effects, GPUImage already covers a wide range of filters, skin smoothing, and even real-time face-aware beautification, and mature solutions abound, so this article focuses on the rest: watermarks (overlays), trimming, concatenation, and audio/video handling. A complete test demo is linked at the end of the article; it edits a video and saves the result to the system photo library. The article mainly points out the pitfalls to watch for.

1. Adding a Watermark to a Video

Since GPUImage was already the go-to library for beauty effects, it was also the first candidate for adding watermarks. In practice, though, its rendering turned out slower than AVFoundation's composition, while its compatibility with different video formats is better. So if speed matters most, prefer AVFoundation; if format compatibility matters most, use GPUImage.

1.1 The GPUImage Approach

GPUImage re-renders the video with GPUImageUIElement and GPUImageMovieWriter, regenerating it with an overlay filter stacked on top. Any one of the three blend filters — GPUImageDissolveBlendFilter, GPUImageAlphaBlendFilter, or GPUImageNormalBlendFilter — will do.

Implementation:

/**
 Add a watermark with GPUImage.

 @param vedioPath source video URL
 @param img       first watermark image
 @param coverImg  second watermark image
 @param question  text watermark
 @param fileName  name of the generated video file
 */
- (void)saveVedioPath:(NSURL *)vedioPath WithWaterImg:(UIImage *)img WithCoverImage:(UIImage *)coverImg WithQustion:(NSString *)question WithFileName:(NSString *)fileName {
    [SVProgressHUD showWithStatus:@"Saving watermarked video to the photo library"];

    // Blend filter
    // filter = [[GPUImageDissolveBlendFilter alloc] init];
    // [(GPUImageDissolveBlendFilter *)filter setMix:0.0f];
    // An alpha blend filter also works:
    // filter = [[GPUImageAlphaBlendFilter alloc] init];
    // // mix is the opacity of the overlay; just use 1.0 here
    // [(GPUImageDissolveBlendFilter *)filter setMix:1.0f];
    filter = [[GPUImageNormalBlendFilter alloc] init];

    NSURL *sampleURL = vedioPath;
    AVAsset *asset = [AVAsset assetWithURL:sampleURL];
    CGSize size = asset.naturalSize;
    movieFile = [[GPUImageMovie alloc] initWithAsset:asset];
    movieFile.playAtActualSpeed = NO;

    // Text watermark
    UILabel *label = [[UILabel alloc] init];
    label.text = question;
    label.font = [UIFont systemFontOfSize:30];
    label.textColor = [UIColor whiteColor];
    [label setTextAlignment:NSTextAlignmentCenter];
    [label sizeToFit];
    label.layer.masksToBounds = YES;
    label.layer.cornerRadius = 18.0f;
    [label setBackgroundColor:[UIColor colorWithRed:0 green:0 blue:0 alpha:0.5]];
    [label setFrame:CGRectMake(50, 100, label.frame.size.width + 20, label.frame.size.height)];

    // Image watermark
    UIImage *coverImage1 = [img copy];
    UIImageView *coverImageView1 = [[UIImageView alloc] initWithImage:coverImage1];
    [coverImageView1 setFrame:CGRectMake(0, 100, 210, 50)];

    // Second image watermark
    UIImage *coverImage2 = [coverImg copy];
    UIImageView *coverImageView2 = [[UIImageView alloc] initWithImage:coverImage2];
    [coverImageView2 setFrame:CGRectMake(270, 100, 210, 50)];

    UIView *subView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, size.width, size.height)];
    subView.backgroundColor = [UIColor clearColor];
    [subView addSubview:coverImageView1];
    [subView addSubview:coverImageView2];
    [subView addSubview:label];

    GPUImageUIElement *uielement = [[GPUImageUIElement alloc] initWithView:subView];

    NSString *pathToMovie = [NSHomeDirectory() stringByAppendingPathComponent:[NSString stringWithFormat:@"Documents/%@.mp4", fileName]];
    unlink([pathToMovie UTF8String]);
    NSURL *movieURL = [NSURL fileURLWithPath:pathToMovie];
    movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL size:CGSizeMake(720.0, 1280.0)];

    GPUImageFilter *progressFilter = [[GPUImageFilter alloc] init];
    [progressFilter addTarget:filter];
    [movieFile addTarget:progressFilter];
    [uielement addTarget:filter];
    movieWriter.shouldPassthroughAudio = YES;
    // movieFile.playAtActualSpeed = true;

    if ([[asset tracksWithMediaType:AVMediaTypeAudio] count] > 0) {
        movieFile.audioEncodingTarget = movieWriter;
    } else { // no audio
        movieFile.audioEncodingTarget = nil;
    }
    [movieFile enableSynchronizedEncodingUsingMovieWriter:movieWriter];

    [filter addTarget:movieWriter];
    [movieWriter startRecording];
    [movieFile startProcessing];

    // dlink = [CADisplayLink displayLinkWithTarget:self selector:@selector(updateProgress)];
    // [dlink setFrameInterval:15];
    // [dlink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
    // [dlink setPaused:NO];

    __weak typeof(self) weakSelf = self;
    // Rendering callback, invoked once per frame
    [progressFilter setFrameProcessingCompletionBlock:^(GPUImageOutput *output, CMTime time) {
        // The watermark can move
        CGRect frame = coverImageView1.frame;
        frame.origin.x += 1;
        frame.origin.y += 1;
        coverImageView1.frame = frame;
        // Hide coverImageView2 after the fifth second
        if (time.value / time.timescale >= 5.0) {
            [coverImageView2 removeFromSuperview];
        }
        [uielement update];
    }];

    // Save to the photo library
    [movieWriter setCompletionBlock:^{
        dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.2 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
            __strong typeof(self) strongSelf = weakSelf;
            [strongSelf->filter removeTarget:strongSelf->movieWriter];
            [strongSelf->movieWriter finishRecording];

            __block PHObjectPlaceholder *placeholder;
            if (UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(pathToMovie)) {
                NSError *error;
                [[PHPhotoLibrary sharedPhotoLibrary] performChangesAndWait:^{
                    PHAssetChangeRequest *createAssetRequest = [PHAssetChangeRequest creationRequestForAssetFromVideoAtFileURL:movieURL];
                    placeholder = [createAssetRequest placeholderForCreatedAsset];
                } error:&error];
                if (error) {
                    [SVProgressHUD showErrorWithStatus:[NSString stringWithFormat:@"%@", error]];
                } else {
                    [SVProgressHUD showSuccessWithStatus:@"Video saved to the photo library"];
                }
            }
        });
    }];
}

To use it, just call the method directly:

- (void)useGpuimage {
    NSURL *videoPath = [NSURL fileURLWithPath:[[NSBundle mainBundle] pathForResource:@"selfS" ofType:@"MOV"]];
    [self saveVedioPath:videoPath
           WithWaterImg:[UIImage imageNamed:@"avatar.png"]
         WithCoverImage:[UIImage imageNamed:@"demo.png"]
            WithQustion:@"Text watermark: hudongdongBlog"
           WithFileName:@"waterVideo"];
}

The callback passed to progressFilter's setFrameProcessingCompletionBlock sets each element's position and visibility, which gives you free control over the watermark — for example, show it at the start and hide it after five seconds. Since the block runs once per frame, simple animations can be done by just updating frames inside it.

In effect, this approach re-records the video and composites the watermark as it records, which is why compatibility is good: almost any supported video format can be processed. Watch out for videos without sound, though. If the source has no audio track and you still try to capture audio into the new video, the write fails with an assertion in -[GPUImageMovieWriter createDataFBO], crashing at:

NSAssert(status == GL_FRAMEBUFFER_COMPLETE, @"Incomplete filter FBO: %d", status);

So before capturing, check whether an audio source exists and branch accordingly:

if ([[asset tracksWithMediaType:AVMediaTypeAudio] count] > 0) {
    movieFile.audioEncodingTarget = movieWriter;
} else { // no audio
    movieFile.audioEncodingTarget = nil;
}

Overall, GPUImage only offers a roundabout, filter-based solution: the watermark image becomes a filter texture that is blended over the video, nothing more. It provides no deeper editing capability.

1.2 The AVFoundation Approach

With AVFoundation, the same video processes faster. It also operates on the video track, audio track, and so on individually, which allows finer control. Anyone who has used an editor like Adobe Premiere or VideoStudio knows the model: to edit a video, you drop the right resource onto the corresponding track.

Watermarking here means editing images into the video track. Anyone who has tried it will have noticed that editing only the video track produces a silent result. Plenty of such code circulates online, all without sound, and the copy-pasters don't know the fix: the composition pulled in only the video-track resource and never touched the audio track. If your edited or trimmed video has no sound, add the audio-track resource as well.

Implementation:

/// Add a watermark with AVFoundation
- (void)AVsaveVideoPath:(NSURL *)videoPath WithWaterImg:(UIImage *)img WithCoverImage:(UIImage *)coverImg WithQustion:(NSString *)question WithFileName:(NSString *)fileName {
    if (!videoPath) {
        return;
    }

    // 1. Create an AVAsset instance; it holds all of the video's information
    NSDictionary *opts = [NSDictionary dictionaryWithObject:@(YES) forKey:AVURLAssetPreferPreciseDurationAndTimingKey];
    videoAsset = [AVURLAsset URLAssetWithURL:videoPath options:opts]; // load the video file
    CMTime startTime = CMTimeMakeWithSeconds(0.2, 600);
    CMTime endTime = CMTimeMakeWithSeconds(videoAsset.duration.value / videoAsset.duration.timescale - 0.2, videoAsset.duration.timescale);

    // Audio source
    AVURLAsset *audioAsset = [[AVURLAsset alloc] initWithURL:videoPath options:opts];

    // 2. Create an AVMutableComposition. From the Apple docs:
    //    "AVMutableComposition is a mutable subclass of AVComposition you use when you want
    //     to create a new composition from existing assets. You can add and remove tracks,
    //     and you can add, remove, and scale time ranges."
    AVMutableComposition *mixComposition = [[AVMutableComposition alloc] init];

    // 3. Video track. Like the tracks in an editing project (audio track, video track, ...),
    //    each holds the corresponding kind of material.
    AVMutableCompositionTrack *videoTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
    // Insert the source video track into the mutable track; the TimeRange is where trimming happens
    [videoTrack insertTimeRange:CMTimeRangeMake(startTime, endTime)
                        ofTrack:[[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0]
                         atTime:kCMTimeZero
                          error:nil];

    // Audio track
    AVMutableCompositionTrack *audioTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
    // Audio source track
    AVAssetTrack *audioAssetTrack = [[audioAsset tracksWithMediaType:AVMediaTypeAudio] firstObject];
    [audioTrack insertTimeRange:CMTimeRangeMake(startTime, endTime)
                        ofTrack:audioAssetTrack
                         atTime:kCMTimeZero
                          error:nil];

    // 3.1 AVMutableVideoCompositionInstruction: one piece of video on the track; it can be scaled, rotated, etc.
    AVMutableVideoCompositionInstruction *mainInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
    mainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, videoTrack.timeRange.duration);

    // 3.2 AVMutableVideoCompositionLayerInstruction: a video track, covering all the material on it
    AVMutableVideoCompositionLayerInstruction *videolayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];
    AVAssetTrack *videoAssetTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];

    // Detect portrait orientation from the preferred transform
    BOOL isVideoAssetPortrait_ = NO;
    CGAffineTransform videoTransform = videoAssetTrack.preferredTransform;
    if (videoTransform.a == 0 && videoTransform.b == 1.0 && videoTransform.c == -1.0 && videoTransform.d == 0) {
        // UIImageOrientationRight
        isVideoAssetPortrait_ = YES;
    }
    if (videoTransform.a == 0 && videoTransform.b == -1.0 && videoTransform.c == 1.0 && videoTransform.d == 0) {
        // UIImageOrientationLeft
        isVideoAssetPortrait_ = YES;
    }
    // if (videoTransform.a == 1.0 && videoTransform.b == 0 && videoTransform.c == 0 && videoTransform.d == 1.0) {
    //     videoAssetOrientation_ = UIImageOrientationUp;
    // }
    // if (videoTransform.a == -1.0 && videoTransform.b == 0 && videoTransform.c == 0 && videoTransform.d == -1.0) {
    //     videoAssetOrientation_ = UIImageOrientationDown;
    // }

    [videolayerInstruction setTransform:videoAssetTrack.preferredTransform atTime:kCMTimeZero];
    [videolayerInstruction setOpacity:0.0 atTime:endTime];

    // 3.3 Add instructions
    mainInstruction.layerInstructions = [NSArray arrayWithObjects:videolayerInstruction, nil];

    // AVMutableVideoComposition manages all video tracks and decides the final video's
    // dimensions; cropping is done here
    AVMutableVideoComposition *mainCompositionInst = [AVMutableVideoComposition videoComposition];
    CGSize naturalSize;
    if (isVideoAssetPortrait_) {
        naturalSize = CGSizeMake(videoAssetTrack.naturalSize.height, videoAssetTrack.naturalSize.width);
    } else {
        naturalSize = videoAssetTrack.naturalSize;
    }

    float renderWidth, renderHeight;
    renderWidth = naturalSize.width;
    renderHeight = naturalSize.height;
    mainCompositionInst.renderSize = CGSizeMake(renderWidth, renderHeight);
    mainCompositionInst.instructions = [NSArray arrayWithObject:mainInstruction];
    mainCompositionInst.frameDuration = CMTimeMake(1, 25);

    [self applyVideoEffectsToComposition:mainCompositionInst
                            WithWaterImg:img
                          WithCoverImage:coverImg
                             WithQustion:question
                                    size:CGSizeMake(renderWidth, renderHeight)];

    // 4. Output path
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];
    NSString *myPathDocs = [documentsDirectory stringByAppendingPathComponent:[NSString stringWithFormat:@"%@.mp4", fileName]];
    unlink([myPathDocs UTF8String]);
    NSURL *videoUrl = [NSURL fileURLWithPath:myPathDocs];

    dlink = [CADisplayLink displayLinkWithTarget:self selector:@selector(updateProgress)];
    [dlink setFrameInterval:15];
    [dlink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
    [dlink setPaused:NO];

    // 5. Export the video file
    exporter = [[AVAssetExportSession alloc] initWithAsset:mixComposition presetName:AVAssetExportPresetHighestQuality];
    exporter.outputURL = videoUrl;
    exporter.outputFileType = AVFileTypeQuickTimeMovie;
    exporter.shouldOptimizeForNetworkUse = YES;
    exporter.videoComposition = mainCompositionInst;
    [exporter exportAsynchronouslyWithCompletionHandler:^{
        dispatch_async(dispatch_get_main_queue(), ^{
            // The export is finished; do whatever you need with the file here
            [self exportDidFinish:exporter];
        });
    }];
}

- (void)applyVideoEffectsToComposition:(AVMutableVideoComposition *)composition WithWaterImg:(UIImage *)img WithCoverImage:(UIImage *)coverImg WithQustion:(NSString *)question size:(CGSize)size {
    UIFont *font = [UIFont systemFontOfSize:30.0];
    CATextLayer *subtitle1Text = [[CATextLayer alloc] init];
    [subtitle1Text setFontSize:30];
    [subtitle1Text setString:question];
    [subtitle1Text setAlignmentMode:kCAAlignmentCenter];
    [subtitle1Text setForegroundColor:[[UIColor whiteColor] CGColor]];
    subtitle1Text.masksToBounds = YES;
    subtitle1Text.cornerRadius = 23.0f;
    [subtitle1Text setBackgroundColor:[UIColor colorWithRed:0 green:0 blue:0 alpha:0.5].CGColor];
    CGSize textSize = [question sizeWithAttributes:[NSDictionary dictionaryWithObjectsAndKeys:font, NSFontAttributeName, nil]];
    [subtitle1Text setFrame:CGRectMake(50, 100, textSize.width + 20, textSize.height + 10)];

    // Image watermark
    CALayer *imgLayer = [CALayer layer];
    imgLayer.contents = (id)img.CGImage;
    // imgLayer.bounds = CGRectMake(0, 0, size.width, size.height);
    imgLayer.bounds = CGRectMake(0, 0, 210, 50);
    imgLayer.position = CGPointMake(size.width / 2.0, size.height / 2.0);

    // Second image watermark
    CALayer *coverImgLayer = [CALayer layer];
    coverImgLayer.contents = (id)coverImg.CGImage;
    // [coverImgLayer setContentsGravity:@"resizeAspect"];
    coverImgLayer.bounds = CGRectMake(50, 200, 210, 50);
    coverImgLayer.position = CGPointMake(size.width / 4.0, size.height / 4.0);

    // 2. The usual overlay
    CALayer *overlayLayer = [CALayer layer];
    [overlayLayer addSublayer:subtitle1Text];
    [overlayLayer addSublayer:imgLayer];
    overlayLayer.frame = CGRectMake(0, 0, size.width, size.height);
    [overlayLayer setMasksToBounds:YES];

    CALayer *parentLayer = [CALayer layer];
    CALayer *videoLayer = [CALayer layer];
    parentLayer.frame = CGRectMake(0, 0, size.width, size.height);
    videoLayer.frame = CGRectMake(0, 0, size.width, size.height);
    [parentLayer addSublayer:videoLayer];
    [parentLayer addSublayer:overlayLayer];
    [parentLayer addSublayer:coverImgLayer];

    // Cover image animation
    CABasicAnimation *anima = [CABasicAnimation animationWithKeyPath:@"opacity"];
    anima.fromValue = [NSNumber numberWithFloat:1.0f];
    anima.toValue = [NSNumber numberWithFloat:0.0f];
    anima.repeatCount = 0;
    anima.duration = 5.0f; // fades out after 5 s
    [anima setRemovedOnCompletion:NO];
    [anima setFillMode:kCAFillModeForwards];
    anima.beginTime = AVCoreAnimationBeginTimeAtZero;
    [coverImgLayer addAnimation:anima forKey:@"opacityAniamtion"];

    composition.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];
}

// Update export progress
- (void)updateProgress {
    [SVProgressHUD showProgress:exporter.progress status:NSLocalizedString(@"Exporting...", nil)];
    if (exporter.progress >= 1.0) {
        [dlink setPaused:true];
        [dlink invalidate];
        // [SVProgressHUD dismiss];
    }
}

The core of the method is editing the resources on the audio and video tracks:

[videoTrack insertTimeRange:CMTimeRangeMake(startTime, endTime)
                    ofTrack:[[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0]
                     atTime:kCMTimeZero
                      error:nil];
[audioTrack insertTimeRange:CMTimeRangeMake(startTime, endTime)
                    ofTrack:audioAssetTrack
                     atTime:kCMTimeZero
                      error:nil];

The later code for adding background music and concatenating videos mainly adjusts the resources on these same two tracks.
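To make that concrete, here is a minimal sketch of swapping in background music: instead of reading the audio track from the source video, read it from a music file and insert that into the composition's audio track. The variable names follow the method above; the file name "bgMusic.mp3" is a made-up example, not from the original demo.

```objc
// Hypothetical background-music file bundled with the app
NSURL *musicURL = [[NSBundle mainBundle] URLForResource:@"bgMusic" withExtension:@"mp3"];
AVURLAsset *musicAsset = [AVURLAsset URLAssetWithURL:musicURL options:nil];
AVAssetTrack *musicTrack = [[musicAsset tracksWithMediaType:AVMediaTypeAudio] firstObject];
if (musicTrack) {
    // Fill the edited video's duration with music instead of the original audio
    [audioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, videoTrack.timeRange.duration)
                        ofTrack:musicTrack
                         atTime:kCMTimeZero
                          error:nil];
}
```

Mixing original audio and music would instead require two audio tracks in the composition, one per source.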

float renderWidth, renderHeight;

These two values control the output render size; when cropping a video, controlling the output dimensions means controlling these two parameters. This is covered in more detail in the section on trimming.
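As a preview, a center crop can be sketched by shrinking the render size and shifting the video with a transform. This is an assumption-laden sketch (the half-size crop factor is arbitrary), reusing the variable names from the method above:

```objc
// Hypothetical center crop to half the natural size
CGFloat cropW = naturalSize.width / 2.0;
CGFloat cropH = naturalSize.height / 2.0;
mainCompositionInst.renderSize = CGSizeMake(cropW, cropH);
// Shift the video so the desired region lands inside the render rectangle
CGAffineTransform shift = CGAffineTransformMakeTranslation(-(naturalSize.width - cropW) / 2.0,
                                                           -(naturalSize.height - cropH) / 2.0);
[videolayerInstruction setTransform:CGAffineTransformConcat(videoAssetTrack.preferredTransform, shift)
                             atTime:kCMTimeZero];
```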

The watermark images themselves are edited in - (void)applyVideoEffectsToComposition, which mainly adds image layers and layer animations on top of the video's layer to produce the watermark.


But since watermarking this way also edits the asset's video and audio tracks, some assets yield no parseable video track (the output frames come out solid blue) and some yield no audio track (the export fails outright). For example, iPhone time-lapse videos have no audio track, so export fails. Both problems are handled in the trimming code: when no video track can be parsed, record a copy with GPUImage first and parse that; when no audio track can be parsed, the video simply has no sound, so skip adding an audio track.
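Following that logic, a guard along these lines could run before building the composition. This is only a sketch of the checks described above, reusing names from the AVFoundation method:

```objc
BOOL hasVideo = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] count] > 0;
BOOL hasAudio = [[videoAsset tracksWithMediaType:AVMediaTypeAudio] count] > 0;

if (!hasVideo) {
    // No parseable video track: fall back to re-recording the asset with
    // GPUImage first, then parse the re-recorded copy instead.
    return;
}
if (hasAudio) {
    // Only insert the audio time range when a track actually exists
    AVAssetTrack *audioAssetTrack = [[videoAsset tracksWithMediaType:AVMediaTypeAudio] firstObject];
    [audioTrack insertTimeRange:CMTimeRangeMake(startTime, endTime)
                        ofTrack:audioAssetTrack
                         atTime:kCMTimeZero
                          error:nil];
}
// When hasAudio is NO, skip the audio track entirely; the result just plays silently.
```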

Related articles

Beauty filters

Watermarks



Last modified: May 14, 2017