Spring Boot: Implementing Face Recognition over a Long-Lived WebSocket Connection (with Code)
Author: 默默努力的小老弟
This article shows how to implement face recognition over a long-lived WebSocket connection in Spring Boot. The sample code is explained in detail and should be a useful reference for study or work.
0. Why WebSocket? Ordinary HTTP requests are intermittent: each one opens a connection with a TCP three-way handshake and then tears it down. Sending many requests in a short time therefore means many handshakes and repeated connection setup and teardown, which slows the response. A WebSocket performs the handshake once and keeps a single connection open, reusing it for every message.
1. Business requirement: I need to process video in Java. A video is really just a sequence of images, one image per frame; the front end keeps sending frames, the server runs face recognition on each one and returns the result. (Alipay's face-scan payment works on the same principle.)
2. On the front end, create a WebSocket object and listen for the server's results in the onmessage handler.
```html
<!DOCTYPE html>
<html>
<head>
    <title>Video frame capture</title>
</head>
<body>
<video id="videoElement" autoplay></video>
<canvas id="canvasElement" style="display: none;"></canvas>
<script>
    // For pages served over https, use the wss:// scheme instead
    var socket = new WebSocket("ws://localhost:8080/facedetect");
    const video = document.getElementById('videoElement');
    const canvas = document.getElementById('canvasElement');
    const context = canvas.getContext('2d');

    socket.onopen = function() {
        console.log("WebSocket connection opened");
        // Send one video frame every 10 seconds. The timer must be started
        // here inside onopen: frames can only be sent after the connection is
        // open, otherwise every send fails with a "WebSocket is closed" error.
        setInterval(captureFrame, 10000);
    };

    socket.onmessage = function(event) {
        var result = event.data;
        // Handle the result returned by the server
        console.log(result);
    };

    socket.onclose = function(event) {
        console.log("WebSocket closed");
    };

    socket.onerror = function(event) {
        console.error('WebSocket error:', event);
    };

    navigator.mediaDevices.getUserMedia({ video: true })
        .then(stream => {
            video.srcObject = stream;
        })
        .catch(error => {
            console.error('Cannot access the camera:', error);
        });

    // Capture the current video frame and send it to the server
    function captureFrame() {
        // Size the canvas to the actual video frame; otherwise it stays at the
        // default 300x150 and the captured image is cropped and distorted
        canvas.width = video.videoWidth;
        canvas.height = video.videoHeight;
        context.drawImage(video, 0, 0, canvas.width, canvas.height);
        const imageDataUrl = canvas.toDataURL('image/jpeg', 0.5);
        console.log(imageDataUrl);
        socket.send(imageDataUrl); // Send the data URL to the WebSocket server
    }
</script>
</body>
</html>
```
3. On the back end, write a configuration class that maps the WebSocket endpoints.
```java
@Configuration
@EnableWebSocket
public class WebSocketConfig implements WebSocketConfigurer {

    @Override
    public void registerWebSocketHandlers(WebSocketHandlerRegistry registry) {
        // Endpoint for the face-enrollment page
        registry.addHandler(myHandler(), "/face").setAllowedOrigins("*");
        // Endpoint for the face-recognition page
        registry.addHandler(myHandler1(), "/facedetect").setAllowedOrigins("*");
    }

    @Bean
    public WebSocketHandler myHandler() {
        return new FaceController();
    }

    @Bean
    public WebSocketHandler myHandler1() {
        return new FaceController1();
    }
}
```
Note that handlers declared as @Bean are singletons shared by every client, so the session field kept in the handlers below always holds only the most recently connected session. That is fine for a single test client; to support multiple concurrent clients, rely on the session argument passed into each callback, or wrap the handlers in Spring's PerConnectionWebSocketHandler.
4. Write the handlers (controllers).
```java
// Handler for face enrollment. Routing is done in WebSocketConfig above, so
// @RequestMapping/@CrossOrigin are not needed on a WebSocket handler.
@Controller
public class FaceController extends TextWebSocketHandler {

    private WebSocketSession session;

    // Called when a WebSocket connection is established
    @Override
    public void afterConnectionEstablished(WebSocketSession session) throws Exception {
        System.out.println("WebSocket connection established");
        // Keep a reference to the WebSocket session
        this.session = session;
    }

    // Called for each incoming WebSocket text message
    @Override
    protected void handleTextMessage(WebSocketSession session, TextMessage message) throws Exception {
        String text = message.getPayload();
        // Strip the "data:image/...;base64," prefix of the data URL
        text = text.replaceFirst("^data:image/[^;]+;base64,?\\s*", "");
        // Drop any characters that cannot appear in Base64
        text = text.replaceAll("[^A-Za-z0-9+/=]", "");
        byte[] imageBytes;
        try {
            // decode() never returns null; it throws on malformed input
            imageBytes = Base64.getDecoder().decode(text);
        } catch (IllegalArgumentException ex) {
            System.out.println("Invalid Base64 data");
            return;
        }
        try {
            // Turn the byte array into a BufferedImage
            ByteArrayInputStream bis = new ByteArrayInputStream(imageBytes);
            BufferedImage bufferedImage = ImageIO.read(bis);
            if (bufferedImage != null) {
                // Example: print the image dimensions
                System.out.println("Image width: " + bufferedImage.getWidth());
                System.out.println("Image height: " + bufferedImage.getHeight());
                // Enroll the face
                Employee e1 = HRService.addEmp(UUID.randomUUID().toString().substring(0, 10), bufferedImage);
                // Save the employee photo to a file
                ImageService.saveFaceImage(bufferedImage, e1.getCode());
                System.out.println(e1.getCode());
                // Further processing of the BufferedImage can go here
            } else {
                System.out.println("Could not read the image");
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```
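The data-URL handling in handleTextMessage can be pulled out into a small, framework-free helper, which makes it easy to test without a running server. The sketch below uses only the JDK; the class and method names are my own, not from the article:

```java
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.Base64;

public class DataUrlDecoder {

    // Strips an optional "data:image/...;base64," prefix, removes characters
    // that cannot appear in Base64, decodes the rest, and reads the bytes as
    // an image. Returns null if the bytes are not a readable image.
    public static BufferedImage decodeDataUrlImage(String dataUrl) throws Exception {
        String b64 = dataUrl.replaceFirst("^data:image/[^;]+;base64,?\\s*", "")
                            .replaceAll("[^A-Za-z0-9+/=]", "");
        byte[] bytes = Base64.getDecoder().decode(b64);
        return ImageIO.read(new ByteArrayInputStream(bytes));
    }

    public static void main(String[] args) throws Exception {
        // Round-trip check: encode a tiny image to a data URL, then decode it back
        BufferedImage src = new BufferedImage(8, 4, BufferedImage.TYPE_INT_RGB);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ImageIO.write(src, "png", out);
        String dataUrl = "data:image/png;base64,"
                + Base64.getEncoder().encodeToString(out.toByteArray());
        BufferedImage decoded = decodeDataUrlImage(dataUrl);
        System.out.println(decoded.getWidth() + "x" + decoded.getHeight()); // prints 8x4
    }
}
```

The round-trip in main mirrors what the browser side does: canvas.toDataURL produces exactly this kind of base64-prefixed string.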
```java
// Handler for face recognition (clock-in)
@Controller
public class FaceController1 extends TextWebSocketHandler {

    private WebSocketSession session;

    // Called when a WebSocket connection is established
    @Override
    public void afterConnectionEstablished(WebSocketSession session) throws Exception {
        System.out.println("WebSocket connection established");
        // Keep a reference to the WebSocket session
        this.session = session;
    }

    // Called for each incoming WebSocket text message
    @Override
    protected void handleTextMessage(WebSocketSession session, TextMessage message) throws Exception {
        String text = message.getPayload();
        // Strip the data-URL prefix and any characters that are not valid Base64
        text = text.replaceFirst("^data:image/[^;]+;base64,?\\s*", "");
        text = text.replaceAll("[^A-Za-z0-9+/=]", "");
        byte[] imageBytes;
        try {
            imageBytes = Base64.getDecoder().decode(text);
        } catch (IllegalArgumentException ex) {
            System.out.println("Invalid Base64 data");
            return;
        }
        try {
            ByteArrayInputStream bis = new ByteArrayInputStream(imageBytes);
            BufferedImage bufferedImage = ImageIO.read(bis);
            if (bufferedImage != null) {
                FaceEngineService.loadAllFaceFeature();
                // Extract the feature vector of the face in the current frame
                FaceFeature faceFeature = FaceEngineService.getFaceFeature(bufferedImage);
                // Match it against the enrolled faces; returns the employee code, or null
                String code = FaceEngineService.detectFace(faceFeature);
                System.out.println(code);
                if (code != null) {
                    // A non-null code means an enrolled employee's face is in the frame
                    Employee e = HRService.getEmp(code); // Look up the employee by code
                    HRService.addClockInRecord(e);       // Record the clock-in
                    session.sendMessage(new TextMessage("Clock-in succeeded"));
                } else {
                    session.sendMessage(new TextMessage("Face not recognized"));
                }
            } else {
                session.sendMessage(new TextMessage("Could not read the image"));
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```
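The reply the server sends back is the only part of the protocol the front end sees, so it is worth pinning down in one place. The small sketch below captures one reasonable reply scheme (the class and method names are my own): a non-null employee code means the face matched an enrolled employee, and the unreadable-image and no-match cases get distinct failure messages.

```java
public class ClockInReply {

    // Maps the outcome of one recognition attempt to the text sent back
    // over the WebSocket.
    public static String clockInReply(boolean imageReadable, String employeeCode) {
        if (!imageReadable) {
            return "Could not read the image";
        }
        return employeeCode != null ? "Clock-in succeeded" : "Face not recognized";
    }

    public static void main(String[] args) {
        System.out.println(clockInReply(true, "emp-001")); // Clock-in succeeded
        System.out.println(clockInReply(true, null));      // Face not recognized
        System.out.println(clockInReply(false, null));     // Could not read the image
    }
}
```

Keeping the reply strings in one method also means the browser-side onmessage handler has a single, stable vocabulary to switch on.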
That concludes this article on implementing face recognition over a long-lived WebSocket connection in Spring Boot. For more on Spring Boot and WebSocket long-lived connections, search 脚本之家's earlier articles or browse the related articles below, and thank you for your continued support of 脚本之家!