Question:

Face recognition in Python

满雨石
2023-03-14

I am able to find faces and save them to my local directory using Python and OpenCV, working frame by frame through a video with the code below:

import cv2
import numpy as np
import os

# Load the Haar cascade once, outside the frame loop
cascPath = "haarcascade_frontalface_default.xml"
faceCascade = cv2.CascadeClassifier(cascPath)

vc = cv2.VideoCapture('new1.avi')
c = 1
rval = vc.isOpened()

while rval:
    rval, frame = vc.read()
    if not rval:
        break

    image_name = str(c) + '.jpg'
    cv2.imwrite(image_name, frame)

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    faces = faceCascade.detectMultiScale(
        gray,
        scaleFactor=1.2,
        minNeighbors=5,
        minSize=(30, 30),
        flags=cv2.CASCADE_SCALE_IMAGE
    )

    print("Found {0} faces!".format(len(faces)))

    if len(faces) >= 1:
        for (x, y, w, h) in faces:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("Faces found", frame)
        cv2.waitKey(0)
    else:
        # Drop frames in which no face was found
        os.remove(image_name)

    c = c + 1
    cv2.waitKey(1)

vc.release()

But now I want to know the identity of the person whose face appears in that video...

How do I determine who this person is?

Something like scanning the face and matching it against a local database of faces, and if a match is found, returning the person's name, and so on.

2 answers

景翰音
2023-03-14

You can use the EigenFaceRecognizer, the FisherFaceRecognizer, or LBPH (Local Binary Patterns Histograms).

All three algorithms are available from Python through OpenCV's cv2.face (contrib) module.
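
(Side note: later opencv-contrib-python releases renamed the factory functions used in the code below; a minimal sketch of creating each recognizer with the newer names, assuming opencv-contrib-python is installed:)

import cv2

# cv2.face ships with the opencv-contrib-python package, not plain opencv-python
eigen_rec = cv2.face.EigenFaceRecognizer_create()    # needs all images to share one size
fisher_rec = cv2.face.FisherFaceRecognizer_create()  # also needs a fixed image size
lbph_rec = cv2.face.LBPHFaceRecognizer_create()      # tolerates differently sized crops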

import os
import sys
import cv2
import numpy as np
from PIL import Image

# Create a recognizer object (older OpenCV 3.x contrib API;
# newer builds expose it as cv2.face.EigenFaceRecognizer_create())
recognizer = cv2.face.createEigenFaceRecognizer()
# But remember: for EigenFaces all images, whether training or testing, have to be the same shape

faceCascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

#==========================================================================
# get_images_and_labels gives us the list of images and the list of labels
# used to train the recognizer created above.
# The function requires the path of the directory where all the images are stored.
#==========================================================================
def get_images_and_labels(path):
    # Append all the absolute image paths in a list image_paths
    image_paths = [os.path.join(path, f) for f in os.listdir(path)
                   if not f.endswith('.sad')]
    # images will contain the face images
    images = []
    # labels will contain the label assigned to each image
    labels = []
    final_images = []
    largest_image_size = 0
    largest_width = 0
    largest_height = 0

    for image_path in image_paths:
        # Read the image and convert it to grayscale
        image_pil = Image.open(image_path).convert('L')
        # Convert the image format into a numpy array
        image = np.array(image_pil, 'uint8')
        # Get the label of the image from the file name, e.g. "subject01.normal" -> 1
        nbr = int(os.path.split(image_path)[1].split(".")[0].replace("subject", ""))
        # Detect the face in the image
        faces = faceCascade.detectMultiScale(image)
        # If a face is detected, append the face to images and the label to labels
        for (x, y, w, h) in faces:
            images.append(image[y: y + h, x: x + w])
            labels.append(nbr)
            cv2.imshow("Adding faces to training set...", image[y: y + h, x: x + w])
            cv2.waitKey(50)

    # Find the largest face crop so that every image can be resized to one common shape
    for image in images:
        if image.size > largest_image_size:
            largest_image_size = image.size
            largest_height, largest_width = image.shape

    for image in images:
        image = cv2.resize(image, (largest_width, largest_height), interpolation=cv2.INTER_CUBIC)
        final_images.append(image)

    # Return the images list, the labels list and the common image size
    return final_images, labels, largest_width, largest_height

#===================================================================
# Perform the training
# train() takes two parameters as input:
# the first parameter is the list of images
# the second parameter is a numpy array of their corresponding labels
#===================================================================
images, labels, max_width, max_height = get_images_and_labels(path)  # path = training image directory
recognizer.train(images, np.array(labels))

# Use the held-out ".sad" images to test the recognizer
image_paths = [os.path.join(path, f) for f in os.listdir(path) if f.endswith('.sad')]
for image_path in image_paths:
    predict_image_pil = Image.open(image_path).convert('L')
    predict_image = np.array(predict_image_pil, 'uint8')
    faces = faceCascade.detectMultiScale(predict_image)
    for (x, y, w, h) in faces:
        result = cv2.face.MinDistancePredictCollector()
        face = predict_image[y: y + h, x: x + w]
        face = cv2.resize(face, (max_width, max_height), interpolation=cv2.INTER_CUBIC)

        # =========================================================
        # predict() fills the collector with the prediction;
        # the label and the distance are read from it below
        # =========================================================
        recognizer.predict(face, result, 0)

        # This gives us the predicted label
        nbr_predicted = result.getLabel()

        # conf tells us how confident the recognizer is in its prediction
        # (for EigenFaces it is a distance, so lower means more confident)
        conf = result.getDist()
        nbr_actual = int(os.path.split(image_path)[1].split(".")[0].replace("subject", ""))
        if nbr_actual == nbr_predicted:
            print("{} is correctly recognized with confidence {}".format(nbr_actual, conf))
        else:
            print("{} is incorrectly recognized as {}".format(nbr_actual, nbr_predicted))
sys.exit()
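
If you are on a recent opencv-contrib-python build, the MinDistancePredictCollector used above is no longer exposed and predict() simply returns a (label, distance) pair. A minimal sketch of the same train-then-identify flow with the current API; the known_faces/ directory layout, file names and the distance threshold are my own assumptions, not part of the answer above:

import os
import cv2
import numpy as np

# Assumed layout: one subdirectory per person under "known_faces/",
# e.g. known_faces/alice/1.jpg, known_faces/bob/1.jpg, ...
detector = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
recognizer = cv2.face.LBPHFaceRecognizer_create()   # LBPH copes with differently sized crops

faces, labels, names = [], [], []
for label, person in enumerate(sorted(os.listdir("known_faces"))):
    names.append(person)
    for fname in os.listdir(os.path.join("known_faces", person)):
        img = cv2.imread(os.path.join("known_faces", person, fname), cv2.IMREAD_GRAYSCALE)
        for (x, y, w, h) in detector.detectMultiScale(img):
            faces.append(img[y:y + h, x:x + w])
            labels.append(label)

recognizer.train(faces, np.array(labels))

# Identify a face found in a new frame
test = cv2.imread("1.jpg", cv2.IMREAD_GRAYSCALE)
for (x, y, w, h) in detector.detectMultiScale(test):
    label, distance = recognizer.predict(test[y:y + h, x:x + w])
    # Lower distance = better match; the threshold below is only an example value
    print(names[label] if distance < 80 else "unknown", distance)
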
宓英哲
2023-03-14

Telling people apart in photos is not a trivial task, but there are examples out there. As 德曼 mentioned in an earlier comment, the best approach is to use machine learning to teach the program what different people's faces look like. One way is to manually find and extract facial features, such as the distance between the eyes relative to the distance between the eyes and the mouth; this, however, requires paying attention to the effects of lens distortion and perspective. Several research papers discuss the best techniques, for example one that uses the eigenvectors of a set of faces to find the most likely match (eigenface-based face recognition).
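
A toy sketch of what such a hand-crafted geometric feature could look like, using OpenCV's bundled eye cascade (haarcascade_eye.xml); a single ratio like this is nowhere near enough to identify a person, it only illustrates the idea:

import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier("haarcascade_eye.xml")

img = cv2.imread("1.jpg", cv2.IMREAD_GRAYSCALE)
for (x, y, w, h) in face_cascade.detectMultiScale(img):
    roi = img[y:y + h, x:x + w]
    eyes = eye_cascade.detectMultiScale(roi)
    if len(eyes) >= 2:
        # take the centres of the two largest eye detections
        eyes = sorted(eyes, key=lambda e: e[2] * e[3], reverse=True)[:2]
        (ex1, ey1, ew1, eh1), (ex2, ey2, ew2, eh2) = eyes
        c1 = np.array([ex1 + ew1 / 2, ey1 + eh1 / 2])
        c2 = np.array([ex2 + ew2 / 2, ey2 + eh2 / 2])
        # normalising by the face width makes the ratio scale-invariant,
        # but it is still sensitive to pose, lens distortion and perspective
        print("inter-eye distance / face width:", np.linalg.norm(c1 - c2) / w)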

Python has a machine learning toolkit called scikit-learn, which implements support for classification, regression, clustering and so on. You can use it to train neural networks, support vector machines and more. Here is a complete example of how to implement the eigenface method with a support vector machine, scikit-learn and Python: a full implementation in Python.
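
For reference, a minimal sketch of that eigenfaces-plus-SVM idea with scikit-learn; it uses the Labelled Faces in the Wild sample that ships with scikit-learn rather than a local face database, and the parameter values are only illustrative:

import numpy as np
from sklearn.datasets import fetch_lfw_people
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Labelled Faces in the Wild sample bundled with scikit-learn
lfw = fetch_lfw_people(min_faces_per_person=70, resize=0.4)
X_train, X_test, y_train, y_test = train_test_split(lfw.data, lfw.target, random_state=42)

# "Eigenfaces": project the raw pixel vectors onto the top principal components
pca = PCA(n_components=150, whiten=True).fit(X_train)
clf = SVC(kernel='rbf', class_weight='balanced').fit(pca.transform(X_train), y_train)

predicted = clf.predict(pca.transform(X_test))
print("accuracy:", np.mean(predicted == y_test))
print("example prediction:", lfw.target_names[predicted[0]])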
