Hands-on: A Driving Recorder Based on Flutter + Django + OpenCV on the MYIR MYD-YT507H Development Board

This review was contributed by honestqiao, an outstanding reviewer from EEWorld (电子工程世界).
This board test is a driving-recorder (dashcam) build on the MYIR MYD-YT507H development board.
In a previously shared article, I experimented with camera streaming on the MYD-YT507H board. Building on that work, I went on to implement the evaluation plan laid out earlier. After plenty of study, I finally used Flutter + Django + OpenCV to build a MYIR driving recorder, and this article shares the details of the implementation. Contents:
1. Business logic planning for the driving recorder
2. Hardware preparation
3. Developing the camera live-view streaming service
4. Recording camera video
5. Complete code for the camera service
6. Developing the RESTful service for historical data
7. Flutter web UI development
8. Overall running effect
9. Road test
10. Code
11. Acknowledgements
12. Summary

1. Business logic planning for the driving recorder
After detailed analysis, I planned the following basic business logic, which breaks down into three parts:
- Recording service: records the video captured by the camera and serves the camera's current picture as a live stream
- Django service: provides a RESTful API for retrieving historical recordings and hosts the Flutter web UI
- Flutter web UI: the interface for watching the live view and playing back historical recordings

To build the recorder's UI quickly and well, and to leave the door open for mobile apps on each platform later, I chose Flutter. As it turned out, the pitfalls were plentiful; still, the cross-platform support really is good.
2. Hardware preparation
The hardware actually used to build this driving recorder is as follows:
- Main board: MYIR MYD-YT507H development board
- Camera: Hikvision DS-E11 720p USB camera
- Storage: SanDisk 32 GB high-speed microSD card
- Router: Yunlai Baohe (云来宝盒) wireless router

The router is not pictured; any ordinary wireless router will do, and of course the more bandwidth the better. The board has two USB 3.0 ports; plug the camera into either one. Then connect the board to the router with a network cable, power it on, and you can start working. In my setup the power connector was a little loose and prone to sudden power loss, so I reinforced it with tape.

3. Developing the camera live-view streaming service
When I tried MJPEG live streaming earlier, I used mjpeg_streamer, but it was not clear how to split the video into segments. A driving recorder normally splits its recordings into fixed-length segments so that no single file grows too large. After careful study I learned that OpenCV can also capture the camera's frames and write them to files as needed, so I settled on a Python + OpenCV approach: Python handles the overall logic, and opencv-python handles capturing the camera's video data. The video-capture part needs to provide the following functions:
- capture data from the camera
- provide a live video view
- write video data to files on a time schedule, splitting files automatically

Capturing the camera data is handled by opencv-python, and writing video data to files is easily done in Python. Providing a live preview, however, took quite a bit of effort: the same frames have to be written to file and served for preview at the same time, so the data must be reused. After further study I learned that the frames captured by opencv-python can be sent over HTTP frame by frame as JPEG data, so that the player receives an MJPEG stream and can play it. For the first version, following reference material, I implemented a Python MJPEG streaming service that read a frame, wrote it to a temporary file, and then read the data back from that file to return it. To improve efficiency I then optimized it to do the conversion entirely in memory, without the temporary file. The resulting code is as follows:
```python
# HTTP request handler: web page and MJPEG stream
class CamHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # MJPEG streaming
        if self.path.endswith('.mjpg'):
            self.send_response(200)
            self.send_header('Content-type', 'multipart/x-mixed-replace; boundary=--jpgboundary')
            self.end_headers()
            while True:
                if is_stop:
                    break
                try:
                    # rc, img = cameraCapture.read()
                    rc, img = success, frame
                    if not rc:
                        continue
                    if True:
                        # PIL path: convert the frame to RGB and stream it as JPEG
                        imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
                        jpg = Image.fromarray(imgRGB)
                        tmpFile = BytesIO()
                        jpg.save(tmpFile, 'JPEG')
                        self.wfile.write(b"--jpgboundary")
                        self.send_header('Content-type', 'image/jpeg')
                        self.send_header('Content-length', str(tmpFile.getbuffer().nbytes))
                        self.end_headers()
                        jpg.save(self.wfile, 'JPEG')
                    else:
                        # Alternative path: encode the frame as JPEG with OpenCV
                        img_fps = jpeg_quality_value
                        img_param = [int(cv2.IMWRITE_JPEG_QUALITY), img_fps]
                        img_str = cv2.imencode('.jpg', img, img_param)[1].tobytes()
                        self.send_header('Content-type', 'image/jpeg')
                        self.end_headers()
                        self.wfile.write(img_str)
                        self.wfile.write(b"--jpgboundary")  # end of this part
                    time.sleep(0.033)
                except KeyboardInterrupt:
                    self.wfile.write(b"--jpgboundary--")
                    break
                except BrokenPipeError:
                    continue
            return
        # Web page that embeds the MJPEG stream
        if self.path == '/' or self.path.endswith('.html'):
            self.send_response(200)
            self.send_header('Content-type', 'text/html')
            self.end_headers()
            self.wfile.write(b'<html><head><title>Live video</title></head><body>')
            self.wfile.write(('<img src="http://%s/live.mjpg"/>' % self.headers.get('Host')).encode())
            self.wfile.write(b'</body></html>')
            return
```
This code provides two functions:
- If you visit http://ip:port/index.html in a browser, it returns a web page that embeds the MJPEG stream URL
- If you visit http://ip:port/live.mjpg in a browser, it returns the MJPEG stream data for playback

During development, once this service is running you can check the result in a browser at any time. It involves some OpenCV and web-server knowledge; the related references provide the background, so I will not go into detail here. I had assumed that with the MJPEG service in place, it could simply be consumed from the Flutter web UI. In practice, this is where the pitfalls appeared: Flutter's package repository does have an MJPEG package, but it no longer works in the current version, and since the maintainers consider it little used, it will not be fixed in the foreseeable future. How depressing!
Still, all roads lead to Rome: when one is blocked, you open another. After more study I found that Flutter's video playback supports a stream mode, which can fetch its data over a WebSocket and then play it. So as long as the server side exposes the captured frame data over a WebSocket, playback works. I therefore used Python to develop a WebSocket service that provides the live video data; the code is as follows:
```python
# WebSocket request handler: push base64-encoded JPEG frames to the client
async def camTransmitHandler(websocket, path):
    print("Client connected!")
    try:
        while True:
            # rc, img = cameraCapture.read()
            rc, img = success, frame
            if not rc:
                continue
            img_fps = jpeg_quality_value
            img_param = [int(cv2.IMWRITE_JPEG_QUALITY), img_fps]
            encoded = cv2.imencode('.jpg', img, img_param)[1]
            # base64-encode the JPEG bytes and strip the b'...' wrapper from the string form
            data = str(base64.b64encode(encoded))
            data = data[2:len(data) - 1]
            await websocket.send(data)
            # cv2.imshow("Transmission", frame)
            # if cv2.waitKey(1) & 0xFF == ord('q'):
            #     break
    except exception_connection_close as e:
        print("Client disconnected!")
    except:
        print("Something went wrong!")
```
This part is even simpler than the previous one: it just converts the data and feeds it to the WebSocket. Neither of the two snippets above contains the complete processing logic, only the key parts; the complete code is provided after each part has been explained. With this, the live streaming feature is in place.
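To sanity-check the WebSocket feed during development, a small client can connect, grab one frame, and save it as a JPEG. The following is a minimal sketch added here for illustration; it is not part of the original project. The host address and output file name are assumptions, while the port (28889) and the base64-encoded JPEG payload match the service code above.

```python
# Minimal test client for the live-frame WebSocket (illustrative sketch, not part of the original project).
# Assumes the service above is running; the host and output file name are placeholders.
import asyncio
import base64

import websockets


async def grab_one_frame(uri="ws://192.168.1.15:28889"):
    async with websockets.connect(uri) as ws:
        data = await ws.recv()              # one base64-encoded JPEG frame
        jpg_bytes = base64.b64decode(data)  # decode back to raw JPEG bytes
        with open("frame.jpg", "wb") as f:
            f.write(jpg_bytes)
        print("saved frame.jpg, %d bytes" % len(jpg_bytes))


asyncio.run(grab_one_frame())
```

If the saved frame.jpg opens as a valid image, the server side is pushing frames correctly, and a client such as the Flutter UI only has to base64-decode each message before handing it to the player.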
4. Recording camera video
The live-view feature of the previous step actually depends on this step, because it needs to share the frames captured here. The basic logic is simple:
- Initialize OpenCV and start capturing frames from the camera
- Check whether the scheduled rotation time has been reached
- If not, keep writing to the current video file
- If it has, close the current video, write a thumbnail, and start writing a new file

The code is as follows:
```python
# Open the camera
cameraCapture = cv2.VideoCapture(camera_no)
# Camera parameters
cameraCapture.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
cameraCapture.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)
cameraCapture.set(cv2.CAP_PROP_SATURATION, 135)
fps = 30
size = (int(cameraCapture.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(cameraCapture.get(cv2.CAP_PROP_FRAME_HEIGHT)))

# Read the first captured frame
success, frame = cameraCapture.read()
...
```
```python
while True:
    if is_stop:
        success = False
        break
    success, frame = cameraCapture.read()
    if not success:
        continue

    time_now = get_current_time()
    # Has the rotation interval elapsed? If so, write a thumbnail and start a new video file
    if time_now["time"] - time_record["time"] >= rotate_time:
        if time_record_prev:
            thumb_file = get_file_name(time_record_prev, 'thumbs', 'jpg')
            print("[INFO] write to thumb: %s" % thumb_file)
            if not os.path.isfile(thumb_file):
                cv2.imwrite(thumb_file, frame)
        time_record = time_now
        time_record_prev = get_current_time()
        video_file = get_file_name(time_record_prev, 'videos', media_ext)
        print("[INFO] write to video: %s" % video_file)
        # encode = cv2.VideoWriter_fourcc(*'mp4v')
        encode = cv2.VideoWriter_fourcc(*'x264')
        # encode = cv2.VideoWriter_fourcc(*'avc1')
        # encode = cv2.VideoWriter_fourcc(*'xvid')
        # encode = cv2.VideoWriter_fourcc(*'h264')
        videoWriter = cv2.VideoWriter(video_file, encode, fps, size)

    numFrameRemaining = rotate_time * fps  # number of frames to capture for this segment
    while success and numFrameRemaining > 0:
        videoWriter.write(frame)
        success, frame = cameraCapture.read()
        numFrameRemaining -= 1

cameraCapture.release()
```
The logic above is clear enough that anyone with OpenCV basics can read it at a glance. One key point to note is encode = cv2.VideoWriter_fourcc(*'x264'): the encoders available differ between environments, and in the Ubuntu environment of the MYD-YT507H board the x264 encoder can be used. The code keeps reading data frames from the camera into the frame variable and writing them to the video file, checking the time to decide whether to start writing a new file. The frame variable is also used by the live-view services described earlier, so it is effectively shared between the two.
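Because the usable FourCC codes depend on the platform and on how OpenCV/FFmpeg were built, a quick way to find a working encoder is to try opening a VideoWriter with each candidate and keep the first one that opens. This is an illustrative sketch added here; the candidate list and the temporary file name are assumptions, not from the original article.

```python
# Probe which FourCC codec actually works in the current environment
# (illustrative sketch; candidate list and test file name are assumptions).
import os
import cv2


def pick_working_fourcc(candidates=('x264', 'avc1', 'mp4v', 'XVID', 'MJPG'),
                        size=(320, 240), fps=30, ext='mkv'):
    for code in candidates:
        test_file = '/tmp/fourcc_test.%s' % ext
        writer = cv2.VideoWriter(test_file, cv2.VideoWriter_fourcc(*code), fps, size)
        ok = writer.isOpened()
        writer.release()
        if os.path.exists(test_file):
            os.remove(test_file)
        if ok:
            return code
    return None


print("usable codec:", pick_working_fourcc())
```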
5. Complete code for the camera service
With the two parts above, the service code for the camera side is complete. The full code is as follows:
```python
# -*- coding: utf-8 -*-
import signal
import cv2
import time
from PIL import Image
from threading import Thread
from http.server import BaseHTTPRequestHandler, HTTPServer
from socketserver import ThreadingMixIn
from io import BytesIO
import os
import sys
import websockets
import asyncio
import base64
import ctypes
import inspect

camera_no = 2
rotate_time = 120
mjpeg_enable = 1
websocket_enable = 1
mjpeg_server_port = 28888
websocket_port = 28889
jpeg_quality_value = 65
store_dir = "./data/" if os.uname()[0] == 'Darwin' else "/sdcard/data/"
media_ext = "mkv"

exception_connection_close = websockets.exceptions.ConnectionClosed if sys.version[:3] == '3.6' else websockets.ConnectionClosed


def _async_raise(tid, exctype):
    """Raises the exception in the given thread, performs cleanup if needed."""
    try:
        tid = ctypes.c_long(tid)
        if not inspect.isclass(exctype):
            exctype = type(exctype)
        res = ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, ctypes.py_object(exctype))
        if res == 0:
            raise ValueError("invalid thread id")
        elif res != 1:
            # if it returns a number greater than one, you're in trouble,
            # and you should call it again with exc=NULL to revert the effect
            ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, None)
            raise SystemError("PyThreadState_SetAsyncExc failed")
    except Exception as err:
        print(err)


def stop_thread(thread):
    """Terminate a thread."""
    _async_raise(thread.ident, SystemExit)


# Signal handler callback: shut everything down cleanly
def signal_handler(signum, frame):
    global is_stop, success
    print('signal_handler: caught signal ' + str(signum))
    if signum == signal.SIGINT.value:
        print('stop server:')
        is_stop = True
        success = False
        print("mjpeg server.socket.close...")
        server.socket.close()
        print("mjpeg server.shutdown...")
        server.shutdown()
        print("ws server.socket.close...")
        server_ws.ws_server.close()
        time.sleep(1)
        print("mjpeg thread.shutdown...")
        thread_mjpeg.join()
        print("ws loop.shutdown...")
        event_loop_ws.call_soon_threadsafe(event_loop_ws.stop)
        time.sleep(1)
        print(thread_mjpeg.is_alive())
        print(thread_ws.is_alive())
        print(event_loop_ws.is_running())
        print("cameraCapture.release...")
        cameraCapture.release()
        print("quit...")
        sys.exit(0)


# HTTP request handler: web page and MJPEG stream
class CamHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # MJPEG streaming
        if self.path.endswith('.mjpg'):
            self.send_response(200)
            self.send_header('Content-type', 'multipart/x-mixed-replace; boundary=--jpgboundary')
            self.end_headers()
            while True:
                if is_stop:
                    break
                try:
                    # rc, img = cameraCapture.read()
                    rc, img = success, frame
                    if not rc:
                        continue
                    if True:
                        # PIL path: convert the frame to RGB and stream it as JPEG
                        imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
                        jpg = Image.fromarray(imgRGB)
                        tmpFile = BytesIO()
                        jpg.save(tmpFile, 'JPEG')
                        self.wfile.write(b"--jpgboundary")
                        self.send_header('Content-type', 'image/jpeg')
                        self.send_header('Content-length', str(tmpFile.getbuffer().nbytes))
                        self.end_headers()
                        jpg.save(self.wfile, 'JPEG')
                    else:
                        # Alternative path: encode the frame as JPEG with OpenCV
                        img_fps = jpeg_quality_value
                        img_param = [int(cv2.IMWRITE_JPEG_QUALITY), img_fps]
                        img_str = cv2.imencode('.jpg', img, img_param)[1].tobytes()
                        self.send_header('Content-type', 'image/jpeg')
                        self.end_headers()
                        self.wfile.write(img_str)
                        self.wfile.write(b"--jpgboundary")  # end of this part
                    time.sleep(0.033)
                except KeyboardInterrupt:
                    self.wfile.write(b"--jpgboundary--")
                    break
                except BrokenPipeError:
                    continue
            return
        # Web page that embeds the MJPEG stream
        if self.path == '/' or self.path.endswith('.html'):
            self.send_response(200)
            self.send_header('Content-type', 'text/html')
            self.end_headers()
            self.wfile.write(b'<html><head><title>Live video</title></head><body>')
            self.wfile.write(('<img src="http://%s/live.mjpg"/>' % self.headers.get('Host')).encode())
            self.wfile.write(b'</body></html>')
            return


class ThreadedHTTPServer(ThreadingMixIn, HTTPServer):
    """Handle requests in a separate thread."""


# Start the MJPEG server
def mjpeg_server_start():
    global success
    global server
    global thread_mjpeg
    try:
        server = ThreadedHTTPServer(('0.0.0.0', mjpeg_server_port), CamHandler)
        print("mjpeg server started: http://0.0.0.0:%d" % mjpeg_server_port)
        thread_mjpeg = Thread(target=server.serve_forever)
        thread_mjpeg.start()
    except KeyboardInterrupt:
        print("mjpeg server stopping...")
        server.socket.close()
        server.shutdown()
        print("mjpeg server stopped")


# WebSocket request handler: push base64-encoded JPEG frames to the client
async def camTransmitHandler(websocket, path):
    print("Client connected!")
    try:
        while True:
            # rc, img = cameraCapture.read()
            rc, img = success, frame
            if not rc:
                continue
            img_fps = jpeg_quality_value
            img_param = [int(cv2.IMWRITE_JPEG_QUALITY), img_fps]
            encoded = cv2.imencode('.jpg', img, img_param)[1]
            # base64-encode the JPEG bytes and strip the b'...' wrapper from the string form
            data = str(base64.b64encode(encoded))
            data = data[2:len(data) - 1]
            await websocket.send(data)
            # cv2.imshow("Transmission", frame)
            # if cv2.waitKey(1) & 0xFF == ord('q'):
            #     break
    except exception_connection_close as e:
        print("Client disconnected!")
    except:
        print("Something went wrong!")


# Start the WebSocket server in its own thread and event loop
def websocket_server_start():
    global thread_ws
    global server_ws
    global event_loop_ws
    event_loop_ws = asyncio.new_event_loop()

    def run_server():
        global server_ws
        print("websocket server started: ws://0.0.0.0:%d" % websocket_port)
        server_ws = websockets.serve(camTransmitHandler, port=websocket_port, loop=event_loop_ws)
        event_loop_ws.run_until_complete(server_ws)
        event_loop_ws.run_forever()

    thread_ws = Thread(target=run_server)
    thread_ws.start()


# Build the file name for a recording or thumbnail
def get_file_name(time_obj, path, ext):
    file_name_time = "%04d-%02d-%02d_%02d-%02d-%02d" % (
        time_obj["year"], time_obj["month"], time_obj["day"],
        time_obj["hour"], time_obj["min"], 0)
    return '%s/%s/%s.%s' % (store_dir, path, file_name_time, ext)


# Get the current time, aligned to the whole minute
def get_current_time():
    time_now = time.localtime()
    time_int = int(time.time())
    return {
        "year": time_now.tm_year,
        "month": time_now.tm_mon,
        "day": time_now.tm_mday,
        "hour": time_now.tm_hour,
        "min": time_now.tm_min,
        "sec": time_now.tm_sec,
        "time": time_int - time_now.tm_sec
    }


# Register the signal handlers
signal.signal(signal.SIGINT, signal_handler)
signal.signal(signal.SIGTERM, signal_handler)

# Open the camera
cameraCapture = cv2.VideoCapture(camera_no)
# Camera parameters
cameraCapture.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
cameraCapture.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)
cameraCapture.set(cv2.CAP_PROP_SATURATION, 135)
fps = 30
size = (int(cameraCapture.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(cameraCapture.get(cv2.CAP_PROP_FRAME_HEIGHT)))

# Read the first captured frame
success, frame = cameraCapture.read()
if not success:
    print("Camera start failed.")
    quit()

is_stop = False
server = None
server_ws = None
event_loop_ws = None
thread_mjpeg = None
thread_ws = None

mjpeg_server_start()
websocket_server_start()

print("Record server start:")
thumb_file = None
video_file = None
time_start = int(time.time())
time_record = {"time": 0}
time_record_prev = None

while True:
    if is_stop:
        success = False
        break
    success, frame = cameraCapture.read()
    if not success:
        continue

    time_now = get_current_time()
    # Has the rotation interval elapsed? If so, write a thumbnail and start a new video file
    if time_now["time"] - time_record["time"] >= rotate_time:
        if time_record_prev:
            thumb_file = get_file_name(time_record_prev, 'thumbs', 'jpg')
            print("[INFO] write to thumb: %s" % thumb_file)
            if not os.path.isfile(thumb_file):
                cv2.imwrite(thumb_file, frame)
        time_record = time_now
        time_record_prev = get_current_time()
        video_file = get_file_name(time_record_prev, 'videos', media_ext)
        print("[INFO] write to video: %s" % video_file)
        # encode = cv2.VideoWriter_fourcc(*'mp4v')
        encode = cv2.VideoWriter_fourcc(*'x264')
        # encode = cv2.VideoWriter_fourcc(*'avc1')
        # encode = cv2.VideoWriter_fourcc(*'xvid')
        # encode = cv2.VideoWriter_fourcc(*'h264')
        videoWriter = cv2.VideoWriter(video_file, encode, fps, size)

    numFrameRemaining = rotate_time * fps  # number of frames to capture for this segment
    while success and numFrameRemaining > 0:
        videoWriter.write(frame)
        success, frame = cameraCapture.read()
        numFrameRemaining -= 1

cameraCapture.release()
```
Besides the three parts discussed earlier, the code above also includes the part that starts the web and WebSocket threads: since the core logic is reading the video data and writing it to files, the other parts are started as threads so that everything runs concurrently. Save the code as drivingrecorderandmjpegserver.py and run it (for the dependencies, see requirements.txt in the repository); it can then be accessed in a browser as described above.

6. Developing the RESTful service for historical data
The historical-data service could also have been hand-written directly in plain Python, but for extensibility I built it with Django. The Django service needs to provide the following functions:
- Provide an API for retrieving the list of historical recordings, so the front-end UI can display them
- Host the Flutter web UI files so they can be accessed from a browser
- Serve static files, for example so historical video files can be viewed

Items 2 and 3 are essentially the same problem, and both can be handled by Django's static-file support; that is, it only takes the following settings in settings.py:
```python
STATIC_URL = 'static/'
STATICFILES_DIRS = [
    BASE_DIR / "static",
]
```
For item 1, providing the API, we need to define the corresponding URL routes, read the historical file information, and generate the JSON structure the front end needs. The code for this part is as follows:
```python
# settings.py: media storage directories and the thumbnail/video file extensions
THUMB_HOME_DIR = "%s/%s/data/thumbs/" % (BASE_DIR, STATIC_URL)
VIDEO_HOME_DIR = "%s/%s/data/videos/" % (BASE_DIR, STATIC_URL)
IMG_FILTER = ['.jpg']
MEDIA_FILTER = ['.mkv']
```

```python
# views.py
import json
import os

from django.conf import settings
from django.http import HttpResponse
from django.shortcuts import render
from rest_framework.decorators import api_view, permission_classes
from rest_framework.permissions import AllowAny
from rest_framework.response import Response

THUMB_HOME_DIR = settings.THUMB_HOME_DIR
VIDEO_HOME_DIR = settings.VIDEO_HOME_DIR
IMG_FILTER = settings.IMG_FILTER
MEDIA_FILTER = settings.MEDIA_FILTER


# Create your views here.
@api_view(['GET'],)
@permission_classes([AllowAny],)
def hello_django(request):
    # Returns hard-coded sample data (used while learning)
    str = '''[
    {
        "id": 1,
        "time": "2022-07-28 21:00",
        "title": "2022-07-28 21:00",
        "body": "videos/2022-07-28_2100.mp4"
    },
    {
        "id": 2,
        "time": "2022-07-28 23:00",
        "title": "2022-07-28 23:00",
        "body": "videos/2022-07-28_2300.mp4"
    },
    {
        "id": 3,
        "time": "2022-07-28 25:00",
        "title": "2022-07-28 25:00",
        "body": "videos/2022-07-28_2500.mp4"
    }
]'''
    _json = json.loads(str)
    return HttpResponse(json.dumps(_json), content_type='application/json')


@api_view(['GET'],)
@permission_classes([AllowAny],)
def history_list(request):
    # Walk the thumbnail directory and build the JSON structure the front end needs
    next = request.GET.get("next", '')
    print(f"thumb next = {next}")
    path = "/".join(request.path.split("/")[3:])
    print(f"thumb request.path= {request.path}")
    print(f"thumb path = {path}")
    data = {"files": [], "dirs": []}
    print(data)
    child_path = THUMB_HOME_DIR + next
    print(f"child_path = {child_path}")
    data['cur_dir'] = path + next
    print(data)
    for dir in os.listdir(child_path):
        if os.path.isfile(child_path + "/" + dir):
            if os.path.splitext(dir)[1] in IMG_FILTER:
                data['files'].append(dir)
        else:
            data['dirs'].append(dir)
    print(data)
    data['files'] = sorted(data['files'])
    data['files'].reverse()
    data['infos'] = []
    for i in range(0, len(data['files'])):
        thumb_name = data['files'][i]
        video_name = thumb_name.replace('.jpg', MEDIA_FILTER[0])
        file_time = thumb_name.replace('.jpg', '').replace('_', ' ')
        data['infos'].append(
            {
                "id": i,
                "time": file_time,
                "title": file_time,
                "body": thumb_name,
                "thumb": thumb_name,
                "video": video_name
            }
        )
    return Response(data['infos'], status=200)
```
There are two endpoints here. hello_django was written at the very beginning for learning purposes and returns hard-coded JSON data. history_list automatically walks the thumbnail directory, collects the thumbnail file information, and generates the required JSON structure. The corresponding repository also contains a requirements.txt that lists the actual dependencies. After downloading the code and entering the directory that contains manage.py, start the service (a sketch of the routing and start-up command is given below), then visit 192.168.1.15:8000/app/hellodjango and the history_list endpoint through the Django REST framework browsable view:
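The article does not show the URL configuration or the exact start-up command. A minimal routing sketch, assuming a Django app named app wired to the two views above, might look like this; the history_list path and the app layout are assumptions (the hellodjango path matches the URL visited above), and the actual repository may organize this differently.

```python
# urls.py -- minimal routing sketch (the app name "app" and the history_list path are assumptions,
# not from the article; only the hellodjango path matches the URL used in the article).
from django.urls import path

from app import views

urlpatterns = [
    path('app/hellodjango', views.hello_django),    # returns the hard-coded sample JSON
    path('app/history_list', views.history_list),   # returns the real history listing
]
```

The development server itself would then presumably be started with Django's standard command, python manage.py runserver 0.0.0.0:8000, which matches the port used in the address above.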
As you can see, the history_list endpoint can already provide the data that is actually needed.

7. Flutter web UI development
This part involves quite a lot of code, so only the key pieces are described here. The actual code lives in the lib directory:
- globals.dart: global variable definitions
- main.dart: program entry point
- home_page.dart: home page
- live_page.dart: live view page
- live_page_mp4.dart: test page for playing an MP4 video
- history_page.dart: history list page
- video_detail.dart: detail page for a single history record
- video_play.dart: plays a specific historical video
- video_model.dart: data model for a single record
- http_service.dart: requests to the RESTful API
- websocket.dart: WebSocket request for the live video

The whole UI uses a Scaffold to emulate a phone/tablet interface. On the live-view page, a WebSocket listener receives the frame data and pushes it to the video player in stream mode; on the history page, the list data is requested through the RESTful API and then rendered.
8. Overall running effect
The actual running effect needs little explanation; the screens speak for themselves:
Screenshots: live view, history list, and history playback.

9. Road test
After repeated testing and verification to make sure every function was complete, I took the system out for a real in-car test.
Because of the recent pandemic, the actual test was just one lap around the village; see the video at the end. When the chance comes, I will pick a clear day and go record somewhere more scenic.
10. Code
The complete code can be obtained from the MYIR driving recorder repository (https://gitee.com/honestqiao/myir-driving-recorder). The directory layout is as follows:
- drivingrecorder: the camera service
- backend: the RESTful service
- frontend: the Flutter web UI

The repository contains detailed instructions for using the code. In an actual deployment, the data directory where videos are recorded is linked to the backend's static/data directory so that the two stay unified (see the sketch below).
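One straightforward way to keep the recorder's data directory and the backend's static/data directory unified is a symbolic link. A minimal sketch, assuming the storage path used by the camera service above; the backend project path is a hypothetical placeholder, and the repository's own instructions may do this differently.

```python
# Link the recorder's data directory into the Django static directory
# (illustrative sketch; the backend project path is a hypothetical placeholder).
import os

record_data_dir = "/sdcard/data"                      # where the camera service writes videos/ and thumbs/
backend_static_data = "/path/to/backend/static/data"  # hypothetical backend project static path

if not os.path.islink(backend_static_data) and not os.path.exists(backend_static_data):
    os.symlink(record_data_dir, backend_static_data)
```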
11. Acknowledgements
During this research I consulted dozens of resources of all kinds; some of them are listed below. My deep thanks go to the authors of everything I learned from.
- janakj/py-mjpeg: Python MJPEG streaming utilities (github.com)
- Simple Python motion JPEG (MJPEG server) from webcam, using OpenCV and BaseHTTPServer (github.com)
- python 使用USB camera录制MP4视频 (frank_abagnale, CSDN博客)
- 用 Python、Nginx 搭建在线家庭影院 (知乎)
- Django报错解决: RuntimeError: Model class ...apps... doesn't declare an explicit app_label (CSDN博客)
- Python OpenCV 调用摄像头并截图保存 (clannad_niu, CSDN博客)
- 用 Python、Nginx 搭建在线家庭影院 (51CTO博客)
- python-opencv录制H264编码的MP4视频 (掘金)
- [VideoWriter] 保存H264/MPEG4格式MP4视频 - Image Processing (zj-image-processing.readthedocs.io)
- Manual USB camera settings in Linux (Kurokesu)
- UVC web cameras (indilib.org)
- 编写你的第一个 Flutter 网页应用 (Flutter 中文文档)
- macOS install (flutter.dev)
- [Django 設定 LANGUAGE_CODE 時所遇到的麻煩] OSError: no translation files found for default language zh-tw (github.com)
- Django and Flutter — a step by step tutorial for a boilerplate application (Clever Tech Memes, Medium)
- joke2k/django-environ: utilize 12factor inspired environment variables to configure your Django application (github.com)
- Django 之跨域访问问题解决 Access-Control-Allow-Origin (腾讯云开发者社区)
- django-cors-headers (PyPI)
- Django项目解决跨域问题 (SegmentFault 思否)
- video_player | Flutter package (pub.dev)
- 视频的播放和暂停 (Flutter 中文文档)
- 5.7 页面骨架(Scaffold) |《Flutter实战·第二版》(flutterchina.club)
- itfitness/BottomNavigationBarDemo (Gitee)
- Flutter底部导航 (简书)
- Flutter之自定义底部导航条以及页面切换实例 (houruoyu3, CSDN博客)
- Flutter 自定义底部导航
- How To Use HTTP Requests in Flutter (DigitalOcean)
- 在Flutter中发起HTTP网络请求 (flutterchina.club)
- Fetch data from the internet (flutter.dev) / 获取网络数据 (Flutter 中文文档)
- 深入理解 Function & Closure (Flutter 中文文档)
- Multithreaded web server in Python (Stack Overflow)
- Simple Python HTTP server with multi-threading and partial-content support (github.com)
- meska/mjpeg_stream_webcam: webcam streamer for OctoPrint on macOS (github.com)
- blueimp/mjpeg-server: MJPEG over HTTP using FFmpeg or any other input source capable of piping a multipart JPEG stream to stdout (github.com)
- n3wtron/simple_mjpeg_streamer_http_server: simple Python MJPEG streamer HTTP server (github.com)
- Python3远程监控程序实现 (肥宅Sean, CSDN博客)
- OpenCV imencode跟imdecode函数jpg (Python) (PythonTechWorld)
- flutter_mjpeg | Flutter package (pub.dev)
- Can't work on web platform · Issue #13 · mylisabox/flutter_mjpeg (github.com)
- Consider if fetch is widely supported enough to use · Issue #595 · dart-lang/http (github.com)
- Creating a live video streaming app in Flutter (Mitrajeet Golsangi, Medium)
- Python websockets.serve方法代码示例 (纯净天空, vimsky.com)
- Flutter 常用組件講解 | ImageWidget (iT 邦幫忙, ithome.com.tw)
- Bad state: Stream has already been listened to · Issue #29105 · flutter/flutter (github.com)
- StreamBuilder with WebSockets stream in TabBarView: Bad state: Stream has already been listened to (Stack Overflow)
- 使用WebSockets (flutterchina.club)
- videostreaming.dart (github.com)
- Flutter: WebSocket封装 - 实现心跳、重连机制 (ricardolsw.github.io)
- 2.3 状态管理 |《Flutter实战·第二版》
- 路由和导航 (Flutter 中文文档)
- 7.6 异步UI更新(FutureBuilder、StreamBuilder) |《Flutter实战·第二版》
- Step by step tutorial in learning Flutter: Lesson 12 — Adding image (Ice Pokemon) (MisterFlutter, Quick Code, Medium)
- Django CORS on static asset (Stack Overflow)
- 配置 | Django 文档 (djangoproject.com)
- Global variables in Dart (Stack Overflow)
- UVC - Community Help Wiki (ubuntu.com)
- 利用OpenCV进行H264视频编码的简易方式 (知乎)
- Documentation for OPENCV_FFMPEG_WRITER_OPTIONS and OPENCV_FFMPEG_CAPTURE_OPTIONS · Issue #21155 · opencv/opencv (github.com)
- FFmpeg概述及编码支持 (知乎)
- Web 渲染器 (Flutter 中文文档)

12. Summary
Through this research I gained a better understanding of the UVC framework on Linux, hands-on experience of application development with Flutter, and concrete experience of applying OpenCV. During development, the biggest pitfalls came from Flutter: it changes so fast that the compatibility of some features has not kept up, though most of the problems really came down to my own limited skill. Also, this is only v1.0, and there is still plenty of room for optimization; for example, the OpenCV parameters could be tuned to improve the quality and size of the captured video. That will have to wait for further study. Most importantly, I got to know the MYIR MYD-YT507H development board in depth through a real application. As a development board built around the automotive-grade T507 processor, it lives up to its reputation!
