!14 Upgrade version to 2.1.0
Merge pull request !14 from divenswu/feature-seeta-face6

README.md
@ -30,6 +30,15 @@
    2. [PCN](https://github.com/Rock-100/FaceKit/tree/master/PCN)

### Version 2.1.0 changes

* 1. Upgraded the InsightScrfdFaceDetection model for more stable detection, and added face angle detection.
* 2. For images where InsightScrfdFaceDetection cannot detect a face, a border-padding pass was added so that overly large faces are no longer missed.
* 3. Added the SeetaFaceOpenRecognition face feature extractor; the supported extractors are now InsightArcFaceRecognition and SeetaFaceOpenRecognition (a minimal usage sketch follows this list).
* 4. Fixed a bug where overly small faces caused the alignment step to fail.
* 5. Added the SeetaFace6 facial-landmark occlusion model.
* 6. Upgraded the Maven dependency versions of opencv, opensearch and onnxruntime.
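
A minimal usage sketch for the new extractor (an editorial illustration, not part of the merge request itself): it mirrors the SeetaFaceOpenRecognitionTest added below, the model and image paths are placeholders for your own files, and the OpenCV native library is assumed to be loaded beforehand.

```java
import com.visual.face.search.core.base.FaceRecognition;
import com.visual.face.search.core.domain.FaceInfo;
import com.visual.face.search.core.domain.ImageMat;
import com.visual.face.search.core.models.SeetaFaceOpenRecognition;
import com.visual.face.search.core.utils.Similarity;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;

public class SeetaFaceQuickStart {
    public static void main(String[] args) {
        // placeholder model path: the SeetaFace6 recognizer exported to ONNX
        FaceRecognition recognition = new SeetaFaceOpenRecognition("model/onnx/recognition_face_seeta/face_recognizer_512.onnx", 1);
        // placeholder inputs: two aligned face crops
        Mat face1 = Imgcodecs.imread("face1.jpg");
        Mat face2 = Imgcodecs.imread("face2.jpg");
        // extract the normalized embeddings and compare them with cosine similarity
        FaceInfo.Embedding embedding1 = recognition.inference(ImageMat.fromCVMat(face1), null);
        FaceInfo.Embedding embedding2 = recognition.inference(ImageMat.fromCVMat(face2), null);
        System.out.println(Similarity.cosineSimilarity(embedding1.embeds, embedding2.embeds));
    }
}
```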

### Version 2.0.1 changes

* 1. Fixed a potential memory leak in the PCN model
@ -41,7 +50,7 @@

### Project documentation

* Online documentation: [Docs-2.0.1](scripts/docs/2.0.0.md)
* Online documentation: [Docs-2.1.0](scripts/docs/2.1.0.md)

* Swagger docs: start the project with swagger enabled, then visit host:port/doc.html, e.g. http://127.0.0.1:8080/doc.html
@ -52,12 +61,12 @@
<dependency>
    <groupId>com.visual.face.search</groupId>
    <artifactId>face-search-client</artifactId>
    <version>2.0.1</version>
    <version>2.1.0</version>
</dependency>
```
* Dependencies for other languages

    Use the RESTful API: [Docs-2.0.1](scripts/docs/2.0.0.md)
    Use the RESTful API: [Docs-2.1.0](scripts/docs/2.1.0.md)


### Project deployment
@ -89,41 +98,41 @@

* Deployment parameters

| Parameter | Description | Default | Options |
| -------- | -----: | :----: |--------|
| VISUAL_SWAGGER_ENABLE | Whether to enable swagger | true | |
| SPRING_DATASOURCE_URL | Database URL | | |
| SPRING_DATASOURCE_USERNAME | Database username | root | |
| SPRING_DATASOURCE_PASSWORD | Database password | root | |
| VISUAL_ENGINE_OPENSEARCH_HOST | OpenSearch host | | |
| VISUAL_ENGINE_OPENSEARCH_PORT | OpenSearch port | 9200 | |
| VISUAL_ENGINE_OPENSEARCH_SCHEME | OpenSearch scheme | https | |
| VISUAL_ENGINE_OPENSEARCH_USERNAME | OpenSearch username | admin | |
| VISUAL_ENGINE_OPENSEARCH_PASSWORD | OpenSearch password | admin | |
| VISUAL_MODEL_FACEDETECTION_NAME | Face detection model name | PcnNetworkFaceDetection | PcnNetworkFaceDetection,InsightScrfdFaceDetection |
| VISUAL_MODEL_FACEDETECTION_BACKUP_NAME | Backup face detection model name | InsightScrfdFaceDetection | PcnNetworkFaceDetection,InsightScrfdFaceDetection |
| VISUAL_MODEL_FACEKEYPOINT_NAME | Face keypoint model name | InsightCoordFaceKeyPoint | InsightCoordFaceKeyPoint |
| VISUAL_MODEL_FACEALIGNMENT_NAME | Face alignment model name | Simple106pFaceAlignment | Simple106pFaceAlignment,Simple005pFaceAlignment |
| VISUAL_MODEL_FACERECOGNITION_NAME | Face feature extraction model name | InsightArcFaceRecognition | InsightArcFaceRecognition |
| Parameter | Description | Default | Options |
| -------- | -----: | :----: |---------------------------------------------------|
| VISUAL_SWAGGER_ENABLE | Whether to enable swagger | true | |
| SPRING_DATASOURCE_URL | Database URL | | |
| SPRING_DATASOURCE_USERNAME | Database username | root | |
| SPRING_DATASOURCE_PASSWORD | Database password | root | |
| VISUAL_ENGINE_OPENSEARCH_HOST | OpenSearch host | | |
| VISUAL_ENGINE_OPENSEARCH_PORT | OpenSearch port | 9200 | |
| VISUAL_ENGINE_OPENSEARCH_SCHEME | OpenSearch scheme | https | |
| VISUAL_ENGINE_OPENSEARCH_USERNAME | OpenSearch username | admin | |
| VISUAL_ENGINE_OPENSEARCH_PASSWORD | OpenSearch password | admin | |
| VISUAL_MODEL_FACEDETECTION_NAME | Face detection model name | InsightScrfdFaceDetection | PcnNetworkFaceDetection,InsightScrfdFaceDetection |
| VISUAL_MODEL_FACEDETECTION_BACKUP_NAME | Backup face detection model name | PcnNetworkFaceDetection | PcnNetworkFaceDetection,InsightScrfdFaceDetection |
| VISUAL_MODEL_FACEKEYPOINT_NAME | Face keypoint model name | InsightCoordFaceKeyPoint | InsightCoordFaceKeyPoint |
| VISUAL_MODEL_FACEALIGNMENT_NAME | Face alignment model name | Simple106pFaceAlignment | Simple106pFaceAlignment,Simple005pFaceAlignment |
| VISUAL_MODEL_FACERECOGNITION_NAME | Face feature extraction model name | InsightArcFaceRecognition | InsightArcFaceRecognition,SeetaFaceOpenRecognition |

### Performance tuning

* To improve the face detection rate, the project uses a primary and a backup face detection model. Two detection models are currently implemented, insightface and PCN; in the docker service the default primary model is PCN and the backup is insightface. insightface is fast, but its detection rate drops for faces rotated by large angles, while pcn can handle heavily rotated images at the cost of speed. If your images are all frontal faces, it is recommended to use insightface as the primary model and pcn as the backup; see the deployment parameters for how to switch.
* To improve the face detection rate, the project uses a primary and a backup face detection model. Two detection models are currently implemented, Insightface and PCN; in the docker service the default primary model is Insightface and the backup is PCN. insightface is fast, but its detection rate drops for faces rotated by large angles, while pcn can handle heavily rotated images at the cost of speed. If your images are all frontal faces, it is recommended to use insightface as the primary model and pcn as the backup; see the deployment parameters for how to switch (a minimal assembly sketch follows).
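
In code, the primary/backup pairing corresponds to the first two constructor arguments of FaceFeatureExtractorImpl (the order used by the updated FaceFeatureExtractTest below). The following is a minimal, hedged assembly sketch: the model paths are placeholders for the ONNX files shipped under model/onnx/, and, per the paragraph above, the backup detector is only consulted when the primary one finds no face.

```java
import com.visual.face.search.core.base.*;
import com.visual.face.search.core.domain.ExtParam;
import com.visual.face.search.core.domain.FaceImage;
import com.visual.face.search.core.domain.ImageMat;
import com.visual.face.search.core.extract.FaceFeatureExtractor;
import com.visual.face.search.core.extract.FaceFeatureExtractorImpl;
import com.visual.face.search.core.models.*;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;

public class ExtractorAssemblySketch {
    public static void main(String[] args) {
        // placeholder ONNX paths
        FaceDetection primary = new InsightScrfdFaceDetection("model/onnx/detection_face_scrfd/scrfd_500m_bnkps.onnx", 1);
        FaceDetection backup = new PcnNetworkFaceDetection(new String[]{"pcn1_sd.onnx", "pcn2_sd.onnx", "pcn3_sd.onnx"}, 1);
        FaceKeyPoint keyPoint = new InsightCoordFaceKeyPoint("model/onnx/keypoint_coordinate/coordinate_106_mobilenet_05.onnx", 1);
        FaceAlignment alignment = new Simple106pFaceAlignment();
        FaceRecognition recognition = new InsightArcFaceRecognition("model/onnx/recognition_face_arc/glint360k_cosface_r18_fp16_0.1.onnx", 1);
        FaceAttribute attribute = new InsightAttributeDetection("model/onnx/attribute_gender_age/insight_gender_age.onnx", 1);
        // primary detector first, backup detector second
        FaceFeatureExtractor extractor = new FaceFeatureExtractorImpl(primary, backup, keyPoint, alignment, recognition, attribute);
        ExtParam extParam = ExtParam.build().setMask(false).setTopK(20).setScoreTh(0).setIouTh(0);
        Mat image = Imgcodecs.imread("face.jpg"); // placeholder input image
        FaceImage faceImage = extractor.extract(ImageMat.fromCVMat(image), extParam, null);
        System.out.println(faceImage.faceInfos());
    }
}
```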

### Project demo

* 2.0.0 test case (optimized to improve the discrimination of search results): face-search-test [Test case - FaceSearchExample](https://gitee.com/open-visual/face-search/blob/master/face-search-test/src/main/java/com/visual/face/search/valid/exps/FaceSearchExample.java)
* 2.1.0 test case: face-search-test [Test case - FaceSearchExample](https://gitee.com/open-visual/face-search/blob/master/face-search-test/src/main/java/com/visual/face/search/valid/exps/FaceSearchExample.java)

### Actor recognition demo (better experience on a mobile phone)
* [http://actor-search.diven.nat300.top](http://actor-search.diven.nat300.top)
* [http://actor-search.divenswu.com](http://actor-search.divenswu.com)


### Discussion group

* DingTalk group
* DingTalk group (disbanded)

Follow the WeChat official account and reply "钉钉群" to get the DingTalk group details.
@ -6,7 +6,7 @@
<groupId>com.visual.face.search</groupId>
<artifactId>face-search-client</artifactId>
<version>2.0.1</version>
<version>2.1.0</version>

<properties>
<java.version>1.8</java.version>
@ -18,7 +18,7 @@
<dependency>
<groupId>com.alibaba</groupId>
<artifactId>fastjson</artifactId>
<version>1.2.58</version>
<version>1.2.83</version>
</dependency>
<dependency>
<groupId>org.apache.httpcomponents</groupId>
@ -5,7 +5,7 @@
<parent>
<artifactId>face-search</artifactId>
<groupId>com.visual.face.search</groupId>
<version>2.0.1</version>
<version>2.1.0</version>
</parent>

<modelVersion>4.0.0</modelVersion>
@ -0,0 +1,21 @@
package com.visual.face.search.core.base;

import com.visual.face.search.core.domain.ImageMat;
import com.visual.face.search.core.domain.QualityInfo;

import java.util.Map;

/**
* Face keypoint occlusion detection
*/
public interface FaceMaskPoint {

/**
* Detect face keypoints and their occlusion scores
* @param imageMat image data
* @param params extra parameters
* @return the detected occlusion keypoints
*/
QualityInfo.MaskPoints inference(ImageMat imageMat, Map<String, Object> params);

}
@ -137,6 +137,18 @@ public class FaceInfo implements Comparable<FaceInfo>, Serializable {
public float distance(Point that){
return (float) Math.sqrt(Math.pow((this.x-that.x), 2)+Math.pow((this.y-that.y), 2));
}

/**
* Translate the point
* @param top number of pixels to move up
* @param bottom number of pixels to move down
* @param left number of pixels to move left
* @param right number of pixels to move right
* @return the translated point
*/
public Point move(int left, int right, int top, int bottom){
return new Point(x - left + right, y - top + bottom);
}
}

/**
@ -246,6 +258,22 @@ public class FaceInfo implements Comparable<FaceInfo>, Serializable {
}
return points;
}

/**
* Translate the points
* @param top number of pixels to move up
* @param bottom number of pixels to move down
* @param left number of pixels to move left
* @param right number of pixels to move right
* @return the translated points
*/
public Points move(int left, int right, int top, int bottom){
Points points = build();
for(Point item : this){
points.add(item.move(left, right, top, bottom));
}
return points;
}
}

/**
@ -419,6 +447,23 @@ public class FaceInfo implements Comparable<FaceInfo>, Serializable {
new Point(leftBottom.x + change_x_p2_p4, leftBottom.y + change_y_p2_p4)
);
}

/**
* Translate the box
* @param top number of pixels to move up
* @param bottom number of pixels to move down
* @param left number of pixels to move left
* @param right number of pixels to move right
* @return the translated box
*/
public FaceBox move(int left, int right, int top, int bottom){
return new FaceBox(
new Point(leftTop.x - left + right, leftTop.y - top + bottom),
new Point(rightTop.x - left + right, rightTop.y - top + bottom),
new Point(rightBottom.x - left + right, rightBottom.y - top + bottom),
new Point(leftBottom.x - left + right, leftBottom.y - top + bottom)
);
}
}

/**
@ -252,6 +252,55 @@ public class ImageMat implements Serializable {
}
}

/**
* Pad the image borders without releasing the original image
* @param top height to extend upward
* @param bottom height to extend downward
* @param left width to extend to the left
* @param right width to extend to the right
* @param borderType type of border padding
* @return the padded image
*/
public ImageMat copyMakeBorderAndNotReleaseMat(int top, int bottom, int left, int right, int borderType){
return this.copyMakeBorder(top, bottom, left, right, borderType, false);
}

/**
* Pad the image borders and release the original image
* @param top height to extend upward
* @param bottom height to extend downward
* @param left width to extend to the left
* @param right width to extend to the right
* @param borderType type of border padding
* @return the padded image
*/
public ImageMat copyMakeBorderAndDoReleaseMat(int top, int bottom, int left, int right, int borderType){
return this.copyMakeBorder(top, bottom, left, right, borderType, true);
}

/**
* Pad the image borders
* @param top height to extend upward
* @param bottom height to extend downward
* @param left width to extend to the left
* @param right width to extend to the right
* @param borderType type of border padding
* @param release whether to release the original image
* @return the padded image
*/
private ImageMat copyMakeBorder(int top, int bottom, int left, int right, int borderType, boolean release){
try {
Mat tempMat = new Mat();
Core.copyMakeBorder(mat, tempMat, top, bottom, left, right, borderType);
return new ImageMat(tempMat);
}finally {
if(release){
this.release();
}
}
}


/**
* Preprocess the image without releasing the original image data
* @param scale scaling factor applied to each channel value
@ -264,7 +313,7 @@ public class ImageMat implements Serializable {
}

/**
* Preprocess the image and release the original image data
* Preprocess the image and release the original image data: first swap the R/B channels (swapRB), then subtract the mean, and finally apply the scale
* @param scale scaling factor applied to each channel value
* @param mean value subtracted from each channel to reduce the influence of lighting
* @param swapRB swap the R and B channels, false by default.
@ -0,0 +1,129 @@
package com.visual.face.search.core.domain;

import java.io.Serializable;
import java.util.ArrayList;
import java.util.Arrays;

public class QualityInfo {

public MaskPoints maskPoints;

private QualityInfo(MaskPoints maskPoints) {
this.maskPoints = maskPoints;
}

public static QualityInfo build(MaskPoints maskPoints){
return new QualityInfo(maskPoints);
}

public MaskPoints getMaskPoints() {
return maskPoints;
}


public boolean isMask(){
return null != this.maskPoints && this.maskPoints.isMask();
}


/**
* Occlusion info
*/
public static class Mask implements Serializable {
/** occlusion score */
public float score;

public static Mask build(float score){
return new QualityInfo.Mask(score);
}

private Mask(float score) {
this.score = score;
}

public float getScore() {
return score;
}

public boolean isMask(){
return this.score >= 0.5;
}

@Override
public String toString() {
return "Mask{" + "score=" + score + '}';
}
}

/**
* Occlusion info for a single keypoint
*/
public static class MaskPoint extends Mask{
/** X coordinate **/
public float x;
/** Y coordinate **/
public float y;

public static MaskPoint build(float x, float y, float score){
return new MaskPoint(x, y, score);
}

private MaskPoint(float x, float y, float score) {
super(score);
this.x = x;
this.y = y;
}

public float getX() {
return x;
}

public float getY() {
return y;
}

@Override
public String toString() {
return "MaskPoint{" + "x=" + x + ", y=" + y + ", score=" + score + '}';
}
}

/**
* Collection of keypoint occlusion info
*/
public static class MaskPoints extends ArrayList<MaskPoint> {

private MaskPoints(){}

/**
* Build an empty collection
* @return a new MaskPoints instance
*/
public static MaskPoints build(){
return new MaskPoints();
}

/**
* Add points
* @param point the points to add
* @return this collection
*/
public MaskPoints add(MaskPoint...point){
super.addAll(Arrays.asList(point));
return this;
}

/**
* Check whether any keypoint is occluded
* @return true if at least one point is occluded
*/
public boolean isMask(){
for(MaskPoint point : this){
if(point.isMask()){
return true;
}
}
return false;
}
}
}
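
Editorial note: a condensed sketch of how this class is consumed, mirroring the new SeetaMaskFaceKeyPointTest further down in this merge request (the model and image paths are placeholders, and the OpenCV native library is assumed to be loaded):

```java
import com.visual.face.search.core.domain.FaceInfo;
import com.visual.face.search.core.domain.ImageMat;
import com.visual.face.search.core.domain.QualityInfo;
import com.visual.face.search.core.models.InsightScrfdFaceDetection;
import com.visual.face.search.core.models.SeetaMaskFaceKeyPoint;
import com.visual.face.search.core.utils.CropUtil;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;

public class MaskPointSketch {
    public static void main(String[] args) {
        // placeholder paths: SCRFD detector and the SeetaFace6 five-point occlusion model
        InsightScrfdFaceDetection detection = new InsightScrfdFaceDetection("scrfd_500m_bnkps.onnx", 1);
        SeetaMaskFaceKeyPoint keyPoint = new SeetaMaskFaceKeyPoint("landmarker_005_mask_pts5.onnx", 1);
        Mat image = Imgcodecs.imread("face.jpg");
        for (FaceInfo faceInfo : detection.inference(ImageMat.fromCVMat(image), 0.5f, 0.7f, null)) {
            // crop the detected face and run the occlusion keypoint model on the crop
            Mat cropFace = CropUtil.crop(image, faceInfo.rotateFaceBox());
            QualityInfo.MaskPoints maskPoints = keyPoint.inference(ImageMat.fromCVMat(cropFace), null);
            // isMask() reports true when any keypoint's occlusion score is >= 0.5
            System.out.println("occluded=" + maskPoints.isMask() + " " + maskPoints);
        }
    }
}
```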
@ -7,6 +7,9 @@ import com.visual.face.search.core.base.BaseOnnxInfer;
import com.visual.face.search.core.base.FaceDetection;
import com.visual.face.search.core.domain.FaceInfo;
import com.visual.face.search.core.domain.ImageMat;
import com.visual.face.search.core.utils.ReleaseUtil;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;

import java.util.*;
@ -24,6 +27,18 @@ public class InsightScrfdFaceDetection extends BaseOnnxInfer implements FaceDete
public final static float defScoreTh = 0.5f;
//face overlap IoU threshold
public final static float defIouTh = 0.7f;
//default scaling applied to the face box
public final static float defBoxScale = 1.0f;
//parameter key for the face box scaling
public final static String scrfdFaceboxScaleParamKey = "scrfdFaceboxScale";
//face angle detection is enabled by default
public final static boolean defNeedCheckFaceAngle = true;
//parameter key for enabling face angle detection
public final static String scrfdFaceNeedCheckFaceAngleParamKey = "scrfdFaceNeedCheckFaceAngle";
//border padding plus re-detection for images with no detected face is enabled by default
public final static boolean defNoFaceImageNeedMakeBorder = true;
//parameter key for enabling the no-face border-padding retry
public final static String scrfdNoFaceImageNeedMakeBorderParamKey = "scrfdNoFaceImageNeedMakeBorder";

/**
* Constructor
@ -43,11 +58,46 @@ public class InsightScrfdFaceDetection extends BaseOnnxInfer implements FaceDete
*/
@Override
public List<FaceInfo> inference(ImageMat image, float scoreTh, float iouTh, Map<String, Object> params) {
List<FaceInfo> faceInfos = this.modelInference(image, scoreTh,iouTh, params);
//pad the image borders and run a second detection pass
if(this.getNoFaceImageNeedMakeBorder(params) && faceInfos.isEmpty()){
//keep the detection model from failing when the face fills most of the image
int t = Double.valueOf(image.toCvMat().height() * 0.2).intValue();
int b = Double.valueOf(image.toCvMat().height() * 0.2).intValue();
int l = Double.valueOf(image.toCvMat().width() * 0.2).intValue();
int r = Double.valueOf(image.toCvMat().width() * 0.2).intValue();
ImageMat tempMat = null;
try {
//detect again on the padded image
tempMat=image.copyMakeBorderAndNotReleaseMat(t, b, l, r, Core.BORDER_CONSTANT);
faceInfos = this.modelInference(tempMat, scoreTh,iouTh, params);
for(FaceInfo faceInfo : faceInfos){
//map the coordinates back to the original image
faceInfo.box = faceInfo.box.move(l, 0, t, 0);
faceInfo.points = faceInfo.points.move(l, 0, t, 0);
}
}finally {
ReleaseUtil.release(tempMat);
}
}
return faceInfos;
}


/**
* Model inference: get the face information
* @param image image data
* @param scoreTh face score threshold
* @param iouTh face IoU threshold
* @return the detected faces
*/
public List<FaceInfo> modelInference(ImageMat image, float scoreTh, float iouTh, Map<String, Object> params) {
OnnxTensor tensor = null;
OrtSession.Result output = null;
ImageMat imageMat = image.clone();
try {
float imgScale = 1.0f;
float boxScale = getBoxScale(params);
iouTh = iouTh <= 0 ? defIouTh : iouTh;
scoreTh = scoreTh <= 0 ? defScoreTh : scoreTh;
int imageWidth = imageMat.getWidth(), imageHeight = imageMat.getHeight();
@ -68,7 +118,10 @@ public class InsightScrfdFaceDetection extends BaseOnnxInfer implements FaceDete
.blobFromImageAndDoReleaseMat(1.0/128, new Scalar(127.5, 127.5, 127.5), true)
.to4dFloatOnnxTensorAndDoReleaseMat(true);
output = getSession().run(Collections.singletonMap(getInputName(), tensor));
return fitterBoxes(output, scoreTh, iouTh, tensor.getInfo().getShape()[3], imgScale);
//get the face information
List<FaceInfo> faceInfos = fitterBoxes(output, scoreTh, iouTh, tensor.getInfo().getShape()[3], imgScale, boxScale);
//check the face angle
return this.checkFaceAngle(faceInfos, this.getNeedCheckFaceAngle(params));
} catch (Exception e) {
throw new RuntimeException(e);
}finally {
@ -94,7 +147,7 @@ public class InsightScrfdFaceDetection extends BaseOnnxInfer implements FaceDete
* @return
* @throws OrtException
*/
private List<FaceInfo> fitterBoxes(OrtSession.Result output, float scoreTh, float iouTh, long tensorWidth, float imgScale) throws OrtException {
private List<FaceInfo> fitterBoxes(OrtSession.Result output, float scoreTh, float iouTh, long tensorWidth, float imgScale, float boxScale) throws OrtException {
//filter by score and compute the corrected face box values
List<FaceInfo> faceInfos = new ArrayList<>();
for(int index=0; index< 3; index++) {
@ -122,7 +175,7 @@ public class InsightScrfdFaceDetection extends BaseOnnxInfer implements FaceDete
float pointY = (point[2*pointIndex+1] * strides[index] + anchorY) * imgScale;
keyPoints.add(FaceInfo.Point.build(pointX, pointY));
}
faceInfos.add(FaceInfo.build(scores[i][0], 0, FaceInfo.FaceBox.build(x1,y1,x2,y2), keyPoints));
faceInfos.add(FaceInfo.build(scores[i][0], 0, FaceInfo.FaceBox.build(x1,y1,x2,y2).scaling(boxScale), keyPoints));
}
}
}
@ -147,4 +200,117 @@ public class InsightScrfdFaceDetection extends BaseOnnxInfer implements FaceDete
return faces;
}

/**
* Detect the face angle; the angle of each face is derived from its 5 keypoints
* @param faceInfos face information
* @param needCheckFaceAngle whether the check is enabled
* @return
*/
private List<FaceInfo> checkFaceAngle(List<FaceInfo> faceInfos, boolean needCheckFaceAngle){
if(!needCheckFaceAngle || null == faceInfos || faceInfos.isEmpty()){
return faceInfos;
}
for(FaceInfo faceInfo : faceInfos){
//compute the angle of the current face
float ax1 = faceInfo.points.get(1).x;
float ay1 = faceInfo.points.get(1).y;
float ax2 = faceInfo.points.get(0).x;
float ay2 = faceInfo.points.get(0).y;
int atan = Double.valueOf(Math.atan2((ay2-ay1), (ax2-ax1)) / Math.PI * 180).intValue();
int angle = (180 - atan + 360) % 360;
int ki = (angle + 45) % 360 / 90;
int rotate = angle - (90 * ki);
float scaling = 1 + Double.valueOf(Math.abs(Math.sin(Math.toRadians(rotate)))).floatValue() / 3;
faceInfo.angle = angle;
//rebuild the corner points, then rotate and scale
if(ki == 0){
FaceInfo.Point leftTop = FaceInfo.Point.build(faceInfo.box.x1(),faceInfo.box.y1());
FaceInfo.Point rightTop = FaceInfo.Point.build(faceInfo.box.x2(),faceInfo.box.y1());
FaceInfo.Point rightBottom = FaceInfo.Point.build(faceInfo.box.x2(),faceInfo.box.y2());
FaceInfo.Point leftBottom = FaceInfo.Point.build(faceInfo.box.x1(),faceInfo.box.y2());
faceInfo.box = new FaceInfo.FaceBox(leftTop, rightTop, rightBottom, leftBottom);
faceInfo.box = faceInfo.box.rotate(rotate).scaling(scaling).rotate(-angle);
}else if(ki == 1){
FaceInfo.Point leftTop = FaceInfo.Point.build(faceInfo.box.x1(),faceInfo.box.y2());
FaceInfo.Point rightTop = FaceInfo.Point.build(faceInfo.box.x1(),faceInfo.box.y1());
FaceInfo.Point rightBottom = FaceInfo.Point.build(faceInfo.box.x2(),faceInfo.box.y1());
FaceInfo.Point leftBottom = FaceInfo.Point.build(faceInfo.box.x2(),faceInfo.box.y2());
faceInfo.box = new FaceInfo.FaceBox(leftTop, rightTop, rightBottom, leftBottom);
faceInfo.box = faceInfo.box.rotate(rotate).scaling(scaling).rotate(-angle);
}else if(ki == 2){
FaceInfo.Point leftTop = FaceInfo.Point.build(faceInfo.box.x2(),faceInfo.box.y2());
FaceInfo.Point rightTop = FaceInfo.Point.build(faceInfo.box.x1(),faceInfo.box.y2());
FaceInfo.Point rightBottom = FaceInfo.Point.build(faceInfo.box.x1(),faceInfo.box.y1());
FaceInfo.Point leftBottom = FaceInfo.Point.build(faceInfo.box.x2(),faceInfo.box.y1());
faceInfo.box = new FaceInfo.FaceBox(leftTop, rightTop, rightBottom, leftBottom);
faceInfo.box = faceInfo.box.rotate(rotate).scaling(scaling).rotate(-angle);
}else if(ki == 3){
FaceInfo.Point leftTop = FaceInfo.Point.build(faceInfo.box.x2(),faceInfo.box.y1());
FaceInfo.Point rightTop = FaceInfo.Point.build(faceInfo.box.x2(),faceInfo.box.y2());
FaceInfo.Point rightBottom = FaceInfo.Point.build(faceInfo.box.x1(),faceInfo.box.y2());
FaceInfo.Point leftBottom = FaceInfo.Point.build(faceInfo.box.x1(),faceInfo.box.y1());
faceInfo.box = new FaceInfo.FaceBox(leftTop, rightTop, rightBottom, leftBottom);
faceInfo.box = faceInfo.box.rotate(rotate).scaling(scaling).rotate(-angle);
}
}
return faceInfos;
}

/** default scaling ratio for the face box **/
private float getBoxScale(Map<String, Object> params){
float boxScale = 0;
try {
if(null != params && params.containsKey(scrfdFaceboxScaleParamKey)){
Object value = params.get(scrfdFaceboxScaleParamKey);
if(null != value){
if (value instanceof Number){
boxScale = ((Number) value).floatValue();
}else{
boxScale = Float.parseFloat(value.toString());
}
}
}
}catch (Exception e){}
return boxScale > 0 ? boxScale : defBoxScale;
}

/** whether face angle detection is required **/
private boolean getNeedCheckFaceAngle(Map<String, Object> params){
boolean needCheckFaceAngle = defNeedCheckFaceAngle;
try {
if(null != params && params.containsKey(scrfdFaceNeedCheckFaceAngleParamKey)){
Object value = params.get(scrfdFaceNeedCheckFaceAngleParamKey);
if(null != value){
if (value instanceof Boolean){
needCheckFaceAngle = (boolean) value;
}else{
needCheckFaceAngle = Boolean.parseBoolean(value.toString());
}
}
}
}catch (Exception e){
e.printStackTrace();
}
return needCheckFaceAngle;
}

/** whether images with no detected face get a border-padding second pass **/
private boolean getNoFaceImageNeedMakeBorder(Map<String, Object> params){
boolean noFaceImageNeedMakeBorder = defNoFaceImageNeedMakeBorder;
try {
if(null != params && params.containsKey(scrfdNoFaceImageNeedMakeBorderParamKey)){
Object value = params.get(scrfdNoFaceImageNeedMakeBorderParamKey);
if(null != value){
if (value instanceof Boolean){
noFaceImageNeedMakeBorder = (boolean) value;
}else{
noFaceImageNeedMakeBorder = Boolean.parseBoolean(value.toString());
}
}
}
}catch (Exception e){
e.printStackTrace();
}
return noFaceImageNeedMakeBorder;
}
}
@ -0,0 +1,59 @@
package com.visual.face.search.core.models;

import ai.onnxruntime.OnnxTensor;
import ai.onnxruntime.OrtSession;
import com.visual.face.search.core.base.BaseOnnxInfer;
import com.visual.face.search.core.base.FaceRecognition;
import com.visual.face.search.core.domain.FaceInfo.Embedding;
import com.visual.face.search.core.domain.ImageMat;
import com.visual.face.search.core.utils.ArrayUtil;
import org.opencv.core.Scalar;
import java.util.Collections;
import java.util.Map;

/**
* Face recognition - face feature extraction
* git:https://github.com/SeetaFace6Open/index
*/
public class SeetaFaceOpenRecognition extends BaseOnnxInfer implements FaceRecognition {

/**
* Constructor
* @param modelPath model path
* @param threads number of threads
*/
public SeetaFaceOpenRecognition(String modelPath, int threads) {
super(modelPath, threads);
}

/**
* Face recognition: extract the face feature vector
* @param image image data
* @return
*/
@Override
public Embedding inference(ImageMat image, Map<String, Object> params) {
OnnxTensor tensor = null;
OrtSession.Result output = null;
try {
tensor = image.resizeAndNoReleaseMat(112,112)
.blobFromImageAndDoReleaseMat(1.0/255, new Scalar(0, 0, 0), false)
.to4dFloatOnnxTensorAndDoReleaseMat(true);
output = getSession().run(Collections.singletonMap(getInputName(), tensor));
float[] embeds = ((float[][]) output.get(0).getValue())[0];
double normValue = ArrayUtil.matrixNorm(embeds);
float[] embedding = ArrayUtil.division(embeds, Double.valueOf(normValue).floatValue());
return Embedding.build(image.toBase64AndNoReleaseMat(), embedding);
} catch (Exception e) {
throw new RuntimeException(e);
}finally {
if(null != tensor){
tensor.close();
}
if(null != output){
output.close();
}
}
}

}
@ -0,0 +1,94 @@
package com.visual.face.search.core.models;

import ai.onnxruntime.OnnxTensor;
import ai.onnxruntime.OrtSession;
import com.visual.face.search.core.base.BaseOnnxInfer;
import com.visual.face.search.core.base.FaceMaskPoint;
import com.visual.face.search.core.domain.ImageMat;
import com.visual.face.search.core.domain.QualityInfo;
import com.visual.face.search.core.utils.SoftMaxUtil;
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;

import java.util.Collections;
import java.util.Map;

public class SeetaMaskFaceKeyPoint extends BaseOnnxInfer implements FaceMaskPoint {

private static final int stride = 8;
private static final int shape = 128;

/**
* Constructor
* @param modelPath model path
* @param threads number of threads
*/
public SeetaMaskFaceKeyPoint(String modelPath, int threads) {
super(modelPath, threads);
}

/**
* Face keypoint occlusion detection
*
* @param imageMat image data
* @param params extra parameters
* @return
*/
@Override
public QualityInfo.MaskPoints inference(ImageMat imageMat, Map<String, Object> params) {
Mat borderMat = null;
Mat resizeMat = null;
OnnxTensor tensor = null;
OrtSession.Result output = null;
try {
Mat image = imageMat.toCvMat();
//pad the image into a square
int w = imageMat.getWidth();
int h = imageMat.getHeight();
int new_w = Math.max(h, w);
int new_h = Math.max(h, w);
if (Math.max(h, w) % stride != 0){
new_w = new_w + (stride - Math.max(h, w) % stride);
new_h = new_h + (stride - Math.max(h, w) % stride);
}
int ow = (new_w - w) / 2;
int oh = (new_h - h) / 2;
borderMat = new Mat();
Core.copyMakeBorder(image, borderMat, oh, oh, ow, ow, Core.BORDER_CONSTANT, new Scalar(114, 114, 114));
//resize the image
float ratio = 1.0f * shape / new_h;
resizeMat = new Mat();
Imgproc.resize(borderMat, resizeMat, new Size(shape, shape));
//model inference
tensor = ImageMat.fromCVMat(resizeMat)
.blobFromImageAndDoReleaseMat(1.0/32, new Scalar(104, 117, 123), false)
.to4dFloatOnnxTensorAndDoReleaseMat(true);
output = this.getSession().run(Collections.singletonMap(this.getInputName(), tensor));
float[] value = ((float[][]) output.get(0).getValue())[0];
//convert back to coordinates in the original crop
QualityInfo.MaskPoints pointList = QualityInfo.MaskPoints.build();
for(int i=0; i<5; i++){
float x = value[i * 4 + 0] / ratio * 128 - ow;
float y = value[i * 4 + 1] / ratio * 128 - oh;
double[] softMax = SoftMaxUtil.softMax(new double[]{value[i * 4 + 2], value[i * 4 + 3]});
pointList.add(QualityInfo.MaskPoint.build(x, y, Double.valueOf(softMax[1]).floatValue()));
}
return pointList;
} catch (Exception e) {
throw new RuntimeException(e);
}finally {
if(null != tensor){
tensor.close();
}
if(null != output){
output.close();
}
if(null != borderMat){
borderMat.release();
}
if(null != resizeMat){
resizeMat.release();
}
}
}
}
@ -12,6 +12,8 @@ import java.util.Map;
* Five-point alignment
*/
public class Simple005pFaceAlignment implements FaceAlignment {
/** minimum edge length **/
private final static float minEdgeLength = 128;

/** alignment matrix **/
private final static double[][] dst_points = new double[][]{
@ -31,16 +33,33 @@ public class Simple005pFaceAlignment implements FaceAlignment {
*/
@Override
public ImageMat inference(ImageMat imageMat, FaceInfo.Points imagePoint, Map<String, Object> params) {
double [][] image_points;
if(imagePoint.size() == 5){
image_points = imagePoint.toDoubleArray();
}else if(imagePoint.size() == 106){
image_points = imagePoint.select(38, 88, 80, 52, 61).toDoubleArray();
}else{
throw new RuntimeException("need 5 point, but get "+ imagePoint.size());
ImageMat alignmentImageMat = null;
try {
FaceInfo.Points alignmentPoints = imagePoint;
if(imageMat.getWidth() < minEdgeLength || imageMat.getHeight() < minEdgeLength){
float scale = minEdgeLength / Math.min(imageMat.getWidth(), imageMat.getHeight());
int newWidth = Float.valueOf(imageMat.getWidth() * scale).intValue();
int newHeight = Float.valueOf(imageMat.getHeight() * scale).intValue();
alignmentImageMat = imageMat.resizeAndNoReleaseMat(newWidth, newHeight);
alignmentPoints = imagePoint.operateMultiply(scale);
}else{
alignmentImageMat = imageMat.clone();
}
double [][] image_points;
if(alignmentPoints.size() == 5){
image_points = alignmentPoints.toDoubleArray();
}else if(alignmentPoints.size() == 106){
image_points = alignmentPoints.select(38, 88, 80, 52, 61).toDoubleArray();
}else{
throw new RuntimeException("need 5 point, but get "+ imagePoint.size());
}
Mat alignMat = AlignUtil.alignedImage(alignmentImageMat.toCvMat(), image_points, 112, 112, dst_points);
return ImageMat.fromCVMat(alignMat);
}finally {
if(null != alignmentImageMat){
alignmentImageMat.release();
}
}
Mat alignMat = AlignUtil.alignedImage(imageMat.toCvMat(), image_points, 112, 112, dst_points);
return ImageMat.fromCVMat(alignMat);
}

}
@ -10,6 +10,9 @@ import java.util.Map;
public class Simple106pFaceAlignment implements FaceAlignment {

/** minimum edge length **/
private final static float minEdgeLength = 128;

/** correction offset **/
private final static double x_offset = 0;
private final static double y_offset = -8;
@ -125,14 +128,31 @@ public class Simple106pFaceAlignment implements FaceAlignment {
@Override
public ImageMat inference(ImageMat imageMat, FaceInfo.Points imagePoint, Map<String, Object> params) {
double [][] image_points;
if(imagePoint.size() == 106){
image_points = imagePoint.toDoubleArray();
}else{
throw new RuntimeException("need 106 point, but get "+ imagePoint.size());
ImageMat alignmentImageMat = null;
try {
FaceInfo.Points alignmentPoints = imagePoint;
if(imageMat.getWidth() < minEdgeLength || imageMat.getHeight() < minEdgeLength){
float scale = minEdgeLength / Math.min(imageMat.getWidth(), imageMat.getHeight());
int newWidth = Float.valueOf(imageMat.getWidth() * scale).intValue();
int newHeight = Float.valueOf(imageMat.getHeight() * scale).intValue();
alignmentImageMat = imageMat.resizeAndNoReleaseMat(newWidth, newHeight);
alignmentPoints = imagePoint.operateMultiply(scale);
}else{
alignmentImageMat = imageMat.clone();
}
double [][] image_points;
if(alignmentPoints.size() == 106){
image_points = alignmentPoints.toDoubleArray();
}else{
throw new RuntimeException("need 106 point, but get "+ alignmentPoints.size());
}
Mat alignMat = AlignUtil.alignedImage(alignmentImageMat.toCvMat(), image_points, 112, 112, dst_points);
return ImageMat.fromCVMat(alignMat);
}finally {
if(null != alignmentImageMat){
alignmentImageMat.release();
}
}
Mat alignMat = AlignUtil.alignedImage(imageMat.toCvMat(), image_points, 112, 112, dst_points);
return ImageMat.fromCVMat(alignMat);
}

}

BIN  face-search-core/src/main/resources/model/onnx/detection_face_scrfd/scrfd_500m_bnkps.onnx (Executable file → Normal file)
@ -0,0 +1,80 @@
package com.visual.face.search.core.test.extract;

import com.visual.face.search.core.base.*;
import com.visual.face.search.core.domain.ExtParam;
import com.visual.face.search.core.domain.FaceImage;
import com.visual.face.search.core.domain.FaceInfo;
import com.visual.face.search.core.domain.ImageMat;
import com.visual.face.search.core.extract.FaceFeatureExtractor;
import com.visual.face.search.core.extract.FaceFeatureExtractorImpl;
import com.visual.face.search.core.models.*;
import com.visual.face.search.core.test.base.BaseTest;
import com.visual.face.search.core.utils.Similarity;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;

import java.util.Arrays;
import java.util.List;
import java.util.Map;

public class FaceCompareTest extends BaseTest {

private static String modelPcn1Path = "face-search-core/src/main/resources/model/onnx/detection_face_pcn/pcn1_sd.onnx";
private static String modelPcn2Path = "face-search-core/src/main/resources/model/onnx/detection_face_pcn/pcn2_sd.onnx";
private static String modelPcn3Path = "face-search-core/src/main/resources/model/onnx/detection_face_pcn/pcn3_sd.onnx";
private static String modelScrfdPath = "face-search-core/src/main/resources/model/onnx/detection_face_scrfd/scrfd_500m_bnkps.onnx";
private static String modelCoordPath = "face-search-core/src/main/resources/model/onnx/keypoint_coordinate/coordinate_106_mobilenet_05.onnx";
private static String modelArcPath = "face-search-core/src/main/resources/model/onnx/recognition_face_arc/glint360k_cosface_r18_fp16_0.1.onnx";
private static String modelSeetaPath = "face-search-core/src/main/resources/model/onnx/recognition_face_seeta/face_recognizer_512.onnx";
private static String modelArrPath = "face-search-core/src/main/resources/model/onnx/attribute_gender_age/insight_gender_age.onnx";

private static String imagePath = "face-search-test/src/main/resources/image/validate/index/马化腾/";
private static String imagePath3 = "face-search-test/src/main/resources/image/validate/index/雷军/";
// private static String imagePath1 = "face-search-core/src/test/resources/images/faces/debug/debug_0001.jpg";
// private static String imagePath2 = "face-search-core/src/test/resources/images/faces/debug/debug_0001.jpg";
// private static String imagePath1 = "face-search-core/src/test/resources/images/faces/compare/1682052661610.jpg";
// private static String imagePath2 = "face-search-core/src/test/resources/images/faces/compare/1682052669004.jpg";
// private static String imagePath2 = "face-search-core/src/test/resources/images/faces/compare/1682053163961.jpg";
// private static String imagePath1 = "face-search-test/src/main/resources/image/validate/index/张一鸣/1c7abcaf2dabdd2bc08e90c224d4c381.jpeg";
private static String imagePath1 = "face-search-core/src/test/resources/images/faces/small/1.png";
private static String imagePath2 = "face-search-core/src/test/resources/images/faces/small/2.png";
public static void main(String[] args) {
//mask model 0.48, light model 0.52, normal model 0.62
Map<String, String> map1 = getImagePathMap(imagePath1);
Map<String, String> map2 = getImagePathMap(imagePath2);
FaceDetection insightScrfdFaceDetection = new InsightScrfdFaceDetection(modelScrfdPath, 1);
FaceKeyPoint insightCoordFaceKeyPoint = new InsightCoordFaceKeyPoint(modelCoordPath, 1);
FaceRecognition insightArcFaceRecognition = new InsightArcFaceRecognition(modelArcPath, 1);
FaceRecognition insightSeetaFaceRecognition = new SeetaFaceOpenRecognition(modelSeetaPath, 1);
FaceAlignment simple005pFaceAlignment = new Simple005pFaceAlignment();
FaceAlignment simple106pFaceAlignment = new Simple106pFaceAlignment();
FaceDetection pcnNetworkFaceDetection = new PcnNetworkFaceDetection(new String[]{modelPcn1Path, modelPcn2Path, modelPcn3Path}, 1);
FaceAttribute insightFaceAttribute = new InsightAttributeDetection(modelArrPath, 1);

FaceFeatureExtractor extractor = new FaceFeatureExtractorImpl(
insightScrfdFaceDetection, pcnNetworkFaceDetection, insightCoordFaceKeyPoint,
simple005pFaceAlignment, insightSeetaFaceRecognition, insightFaceAttribute);

for(String file1 : map1.keySet()){
for(String file2 : map2.keySet()){
Mat image1 = Imgcodecs.imread(map1.get(file1));
long s = System.currentTimeMillis();
ExtParam extParam = ExtParam.build().setMask(false).setTopK(20).setScoreTh(0).setIouTh(0);
FaceImage faceImage1 = extractor.extract(ImageMat.fromCVMat(image1), extParam, null);
List<FaceInfo> faceInfos1 = faceImage1.faceInfos();
long e = System.currentTimeMillis();
System.out.println("image1 extract cost:"+(e-s)+"ms");

Mat image2 = Imgcodecs.imread(map2.get(file2));
s = System.currentTimeMillis();
FaceImage faceImage2 = extractor.extract(ImageMat.fromCVMat(image2), extParam, null);
List<FaceInfo> faceInfos2 = faceImage2.faceInfos();
e = System.currentTimeMillis();
System.out.println("image2 extract cost:"+(e-s)+"ms");
float similarity = Similarity.cosineSimilarityNorm(faceInfos1.get(0).embedding.embeds, faceInfos2.get(0).embedding.embeds);
System.out.println(file1 + ","+ file2 + ",face similarity="+similarity);
}
}

}
}
@ -1,5 +1,6 @@
package com.visual.face.search.core.test.extract;

import com.alibaba.fastjson.JSONObject;
import com.visual.face.search.core.base.*;
import com.visual.face.search.core.domain.ExtParam;
import com.visual.face.search.core.domain.FaceImage;
@ -29,8 +30,9 @@ public class FaceFeatureExtractTest extends BaseTest {
private static String modelArcPath = "face-search-core/src/main/resources/model/onnx/recognition_face_arc/glint360k_cosface_r18_fp16_0.1.onnx";
private static String modelArrPath = "face-search-core/src/main/resources/model/onnx/attribute_gender_age/insight_gender_age.onnx";

// private static String imagePath = "face-search-core/src/test/resources/images/faces";
private static String imagePath = "face-search-core/src/test/resources/images/faces/debug/debug_0001.jpg";
private static String imagePath = "face-search-core/src/test/resources/images/faces";
// private static String imagePath = "face-search-core/src/test/resources/images/faces/debug/debug_0001.jpg";
// private static String imagePath = "face-search-core/src/test/resources/images/faces/rotate/rotate_0002.jpg";


public static void main(String[] args) {
@ -44,7 +46,7 @@ public class FaceFeatureExtractTest extends BaseTest {
FaceAttribute insightFaceAttribute = new InsightAttributeDetection(modelArrPath, 1);

FaceFeatureExtractor extractor = new FaceFeatureExtractorImpl(
pcnNetworkFaceDetection, insightScrfdFaceDetection, insightCoordFaceKeyPoint,
insightScrfdFaceDetection, pcnNetworkFaceDetection, insightCoordFaceKeyPoint,
simple005pFaceAlignment, insightArcFaceRecognition, insightFaceAttribute);
for(String fileName : map.keySet()){
String imageFilePath = map.get(fileName);
@ -56,7 +58,8 @@ public class FaceFeatureExtractTest extends BaseTest {
.setTopK(20)
.setScoreTh(0)
.setIouTh(0);
FaceImage faceImage = extractor.extract(ImageMat.fromCVMat(image), extParam, null);
Map<String, Object> params = new JSONObject().fluentPut(InsightScrfdFaceDetection.scrfdFaceNeedCheckFaceAngleParamKey, true);
FaceImage faceImage = extractor.extract(ImageMat.fromCVMat(image), extParam, params);
List<FaceInfo> faceInfos = faceImage.faceInfos();
long e = System.currentTimeMillis();
System.out.println("fileName="+fileName+",\tcost="+(e-s)+",\t"+faceInfos);
@ -67,7 +70,7 @@ public class FaceFeatureExtractTest extends BaseTest {
Imgproc.line(image, new Point(box.rightTop.x, box.rightTop.y), new Point(box.rightBottom.x, box.rightBottom.y), new Scalar(255,0,0), 1);
Imgproc.line(image, new Point(box.rightBottom.x, box.rightBottom.y), new Point(box.leftBottom.x, box.leftBottom.y), new Scalar(255,0,0), 1);
Imgproc.line(image, new Point(box.leftBottom.x, box.leftBottom.y), new Point(box.leftTop.x, box.leftTop.y), new Scalar(255,0,0), 1);
Imgproc.putText(image, String.valueOf(faceInfo.angle), new Point(box.leftTop.x, box.leftTop.y), Imgproc.FONT_HERSHEY_PLAIN, 1, new Scalar(0,0,255));
Imgproc.putText(image, String.valueOf(faceInfo.angle), new Point(box.leftTop.x, box.leftTop.y+15), Imgproc.FONT_HERSHEY_PLAIN, 1, new Scalar(0,0,255));
// Imgproc.rectangle(image, new Point(faceInfo.box.x1(), faceInfo.box.y1()), new Point(faceInfo.box.x2(), faceInfo.box.y2()), new Scalar(255,0,255));

FaceInfo.FaceBox box1 = faceInfo.rotateFaceBox();
@ -1,5 +1,6 @@
package com.visual.face.search.core.test.models;

import com.alibaba.fastjson.JSONObject;
import com.visual.face.search.core.domain.FaceInfo;
import com.visual.face.search.core.domain.ImageMat;
import com.visual.face.search.core.models.InsightScrfdFaceDetection;
@ -11,15 +12,17 @@ import org.opencv.highgui.HighGui;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class InsightScrfdFaceDetectionTest extends BaseTest {
private static String modelPath = "face-search-core/src/main/resources/model/onnx/detection_face_scrfd/scrfd_500m_bnkps.onnx";

private static String imagePath = "face-search-core/src/test/resources/images/faces";
// private static String imagePath = "face-search-core/src/test/resources/images/faces/rotate";
// private static String imagePath = "face-search-core/src/test/resources/images/faces/debug";
// private static String imagePath = "face-search-core/src/test/resources/images/faces";
// private static String imagePath = "face-search-core/src/test/resources/images/faces/rotate/rotate_0001.jpg";
private static String imagePath = "face-search-core/src/test/resources/images/faces/rotate";
// private static String imagePath = "face-search-core/src/test/resources/images/faces/big/big_002.jpg";


public static void main(String[] args) {
@ -31,7 +34,9 @@ public class InsightScrfdFaceDetectionTest extends BaseTest {
System.out.println(imageFilePath);
Mat image = Imgcodecs.imread(imageFilePath);
long s = System.currentTimeMillis();
List<FaceInfo> faceInfos = infer.inference(ImageMat.fromCVMat(image), 0.5f, 0.7f, null);
Map<String, Object> params = new JSONObject().fluentPut(InsightScrfdFaceDetection.scrfdFaceNeedCheckFaceAngleParamKey, true);

List<FaceInfo> faceInfos = infer.inference(ImageMat.fromCVMat(image), 0.48f, 0.7f, params);
long e = System.currentTimeMillis();
if(faceInfos.size() > 0){
System.out.println("fileName="+fileName+",\tcost="+(e-s)+",\t"+faceInfos.get(0).score);
@ -39,10 +44,20 @@ public class InsightScrfdFaceDetectionTest extends BaseTest {
System.out.println("fileName="+fileName+",\tcost="+(e-s)+",\t"+faceInfos);
}

//adjust the coordinates for drawing
for(FaceInfo faceInfo : faceInfos){
Imgproc.rectangle(image, new Point(faceInfo.box.x1(), faceInfo.box.y1()), new Point(faceInfo.box.x2(), faceInfo.box.y2()), new Scalar(0,0,255));
FaceInfo.FaceBox box = faceInfo.rotateFaceBox();
Imgproc.circle(image, new Point(box.leftTop.x, box.leftTop.y), 3, new Scalar(0,0,255), -1);
Imgproc.circle(image, new Point(box.rightBottom.x, box.rightBottom.y), 3, new Scalar(0,0,255), -1);
Imgproc.line(image, new Point(box.leftTop.x, box.leftTop.y), new Point(box.rightTop.x, box.rightTop.y), new Scalar(0,0,255), 1);
Imgproc.line(image, new Point(box.rightTop.x, box.rightTop.y), new Point(box.rightBottom.x, box.rightBottom.y), new Scalar(255,0,0), 1);
Imgproc.line(image, new Point(box.rightBottom.x, box.rightBottom.y), new Point(box.leftBottom.x, box.leftBottom.y), new Scalar(255,0,0), 1);
Imgproc.line(image, new Point(box.leftBottom.x, box.leftBottom.y), new Point(box.leftTop.x, box.leftTop.y), new Scalar(255,0,0), 1);
Imgproc.putText(image, String.valueOf(faceInfo.angle), new Point(box.leftTop.x, box.leftTop.y), Imgproc.FONT_HERSHEY_PLAIN, 1, new Scalar(0,0,255));

FaceInfo.Points points = faceInfo.points;
int pointNum = 1;
for(FaceInfo.Point keyPoint : faceInfo.points){
for(FaceInfo.Point keyPoint : points){
Imgproc.circle(image, new Point(keyPoint.x, keyPoint.y), 3, new Scalar(0,0,255), -1);
Imgproc.putText(image, String.valueOf(pointNum), new Point(keyPoint.x+1, keyPoint.y), Imgproc.FONT_HERSHEY_PLAIN, 1, new Scalar(255,0,0));
pointNum ++ ;
@ -21,7 +21,9 @@ public class PcnNetworkFaceDetectionTest extends BaseTest {
private static String model2Path = "face-search-core/src/main/resources/model/onnx/detection_face_pcn/pcn2_sd.onnx";
private static String model3Path = "face-search-core/src/main/resources/model/onnx/detection_face_pcn/pcn3_sd.onnx";

private static String imagePath = "face-search-core/src/test/resources/images/faces";
// private static String imagePath = "face-search-core/src/test/resources/images/faces";
private static String imagePath = "face-search-core/src/test/resources/images/faces/rotate/rotate_0001.jpg";
// private static String imagePath = "face-search-core/src/test/resources/images/faces/big/big_002.jpg";
// private static String imagePath = "face-search-core/src/test/resources/images/faces/rotate";
// private static String imagePath = "face-search-core/src/test/resources/images/faces/debug";
@ -0,0 +1,53 @@
package com.visual.face.search.core.test.models;

import com.visual.face.search.core.base.FaceAlignment;
import com.visual.face.search.core.base.FaceKeyPoint;
import com.visual.face.search.core.base.FaceRecognition;
import com.visual.face.search.core.domain.FaceInfo;
import com.visual.face.search.core.domain.ImageMat;
import com.visual.face.search.core.models.InsightCoordFaceKeyPoint;
import com.visual.face.search.core.models.SeetaFaceOpenRecognition;
import com.visual.face.search.core.models.Simple005pFaceAlignment;
import com.visual.face.search.core.test.base.BaseTest;
import com.visual.face.search.core.utils.CropUtil;
import com.visual.face.search.core.utils.Similarity;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;

import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class SeetaFaceOpenRecognitionTest extends BaseTest {
private static String modelCoordPath = "face-search-core/src/main/resources/model/onnx/keypoint_coordinate/coordinate_106_mobilenet_05.onnx";
private static String modelSeetaPath = "face-search-core/src/main/resources/model/onnx/recognition_fcae_seeta/face_recognizer_512.onnx";
// private static String modelSeetaPath = "face-search-core/src/main/resources/model/onnx/recognition_fcae_seeta/face_recognizer_1024.onnx";

private static String imagePath = "face-search-core/src/test/resources/images/faces";
// private static String imagePath1 = "face-search-core/src/test/resources/images/faces/debug/debug_0001.jpg";
// private static String imagePath2 = "face-search-core/src/test/resources/images/faces/debug/debug_0004.jpeg";
private static String imagePath1 = "face-search-core/src/test/resources/images/faces/compare/1682052661610.jpg";
private static String imagePath2 = "face-search-core/src/test/resources/images/faces/compare/1682052669004.jpg";
// private static String imagePath2 = "face-search-core/src/test/resources/images/faces/compare/1682053163961.jpg";

public static void main(String[] args) {
FaceAlignment simple005pFaceAlignment = new Simple005pFaceAlignment();
FaceKeyPoint insightCoordFaceKeyPoint = new InsightCoordFaceKeyPoint(modelCoordPath, 1);
FaceRecognition insightSeetaFaceRecognition = new SeetaFaceOpenRecognition(modelSeetaPath, 1);

Mat image1 = Imgcodecs.imread(imagePath1);
Mat image2 = Imgcodecs.imread(imagePath2);
// image1 = CropUtil.crop(image1, FaceInfo.FaceBox.build(54,27,310,380));
// image2 = CropUtil.crop(image2, FaceInfo.FaceBox.build(48,13,292,333));
// image2 = CropUtil.crop(image2, FaceInfo.FaceBox.build(52,9,235,263));

// simple005pFaceAlignment.inference()

FaceInfo.Embedding embedding1 = insightSeetaFaceRecognition.inference(ImageMat.fromCVMat(image1), null);
FaceInfo.Embedding embedding2 = insightSeetaFaceRecognition.inference(ImageMat.fromCVMat(image2), null);
float similarity = Similarity.cosineSimilarity(embedding1.embeds, embedding2.embeds);
System.out.println(similarity);
// System.out.println(Arrays.toString(embedding1.embeds));
// System.out.println(Arrays.toString(embedding2.embeds));
}
}
@ -0,0 +1,57 @@
package com.visual.face.search.core.test.models;

import com.visual.face.search.core.domain.FaceInfo;
import com.visual.face.search.core.domain.ImageMat;
import com.visual.face.search.core.domain.QualityInfo;
import com.visual.face.search.core.models.InsightCoordFaceKeyPoint;
import com.visual.face.search.core.models.InsightScrfdFaceDetection;
import com.visual.face.search.core.models.SeetaMaskFaceKeyPoint;
import com.visual.face.search.core.test.base.BaseTest;
import com.visual.face.search.core.utils.CropUtil;
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.highgui.HighGui;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

import java.util.List;
import java.util.Map;

public class SeetaMaskFaceKeyPointTest extends BaseTest {
private static String modelDetectionPath = "face-search-core/src/main/resources/model/onnx/detection_face_scrfd/scrfd_500m_bnkps.onnx";
private static String modelKeypointPath = "face-search-core/src/main/resources/model/onnx/keypoint_seeta_mask/landmarker_005_mask_pts5.onnx";
private static String imagePath = "face-search-core/src/test/resources/images/faces";
// private static String imagePath = "face-search-core/src/test/resources/images/faces/compare";
// private static String imagePath = "face-search-core/src/test/resources/images/faces/compare/1694353163955.jpg";

public static void main(String[] args) {
Map<String, String> map = getImagePathMap(imagePath);
InsightScrfdFaceDetection detectionInfer = new InsightScrfdFaceDetection(modelDetectionPath, 1);
SeetaMaskFaceKeyPoint keyPointInfer = new SeetaMaskFaceKeyPoint(modelKeypointPath, 1);
for(String fileName : map.keySet()) {
System.out.println(fileName);
String imageFilePath = map.get(fileName);
Mat image = Imgcodecs.imread(imageFilePath);
List<FaceInfo> faceInfos = detectionInfer.inference(ImageMat.fromCVMat(image), 0.5f, 0.7f, null);
for(FaceInfo faceInfo : faceInfos){
FaceInfo.FaceBox rotateFaceBox = faceInfo.rotateFaceBox();
Mat cropFace = CropUtil.crop(image, rotateFaceBox.scaling(1.0f));
ImageMat cropImageMat = ImageMat.fromCVMat(cropFace);
QualityInfo.MaskPoints maskPoints = keyPointInfer.inference(cropImageMat, null);
System.out.println(maskPoints);
for(QualityInfo.MaskPoint maskPoint : maskPoints){
if(maskPoint.isMask()){
Imgproc.circle(cropFace, new Point(maskPoint.x, maskPoint.y), 3, new Scalar(0, 0, 255), -1);
}else{
Imgproc.circle(cropFace, new Point(maskPoint.x, maskPoint.y), 3, new Scalar(255, 0, 0), -1);
}
}
HighGui.imshow(fileName, cropFace);
HighGui.waitKey();
}
}
System.exit(1);
}

}
BIN  face-search-core/src/test/resources/images/faces/big/big_001.jpg (new file, 32 KiB)
BIN  face-search-core/src/test/resources/images/faces/big/big_002.jpg (new file, 122 KiB)
BIN  (five further new test images: 46 KiB, 34 KiB, 23 KiB, 75 KiB, 68 KiB)
BIN  face-search-core/src/test/resources/images/faces/small/1.png (new file, 616 KiB)
BIN  face-search-core/src/test/resources/images/faces/small/2.png (new file, 727 KiB)
@ -5,7 +5,7 @@
<parent>
<artifactId>face-search</artifactId>
<groupId>com.visual.face.search</groupId>
<version>2.0.1</version>
<version>2.1.0</version>
</parent>
<modelVersion>4.0.0</modelVersion>
@ -5,7 +5,7 @@
<parent>
<artifactId>face-search</artifactId>
<groupId>com.visual.face.search</groupId>
<version>2.0.1</version>
<version>2.1.0</version>
</parent>
<modelVersion>4.0.0</modelVersion>
@ -26,9 +26,9 @@ public class Knife4jConfig {
.apiInfo(new ApiInfoBuilder()
.title("人脸搜索服务API")
.description("人脸搜索服务API")
.version("2.0.0")
.version("2.1.0")
.build())
.groupName("2.0.0")
.groupName("2.1.0")
.select()
.apis(RequestHandlerSelectors.basePackage("com.visual.face.search.server.controller.server"))
.paths(PathSelectors.any())
@ -4,6 +4,7 @@ import com.visual.face.search.core.base.*;
import com.visual.face.search.core.extract.FaceFeatureExtractor;
import com.visual.face.search.core.extract.FaceFeatureExtractorImpl;
import com.visual.face.search.core.models.*;
import com.visual.face.search.server.utils.StringUtils;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
|
@Configuration("visualModelConfig")
public class ModelConfig {

@Value("${spring.profiles.active}")
private String profile;
@Value("${visual.model.baseModelPath}")
private String baseModelPath;

@Value("${visual.model.faceDetection.name}")
private String faceDetectionName;
|
||||
public FaceRecognition getFaceRecognition(){
|
||||
if(faceRecognitionName.equalsIgnoreCase("InsightArcFaceRecognition")){
|
||||
return new InsightArcFaceRecognition(getModelPath(faceRecognitionName, faceRecognitionNameModel)[0], faceRecognitionNameThread);
|
||||
}else if(faceRecognitionName.equalsIgnoreCase("SeetaFaceOpenRecognition")){
|
||||
return new SeetaFaceOpenRecognition(getModelPath(faceRecognitionName, faceRecognitionNameModel)[0], faceRecognitionNameThread);
|
||||
}else{
|
||||
return new InsightArcFaceRecognition(getModelPath(faceRecognitionName, faceRecognitionNameModel)[0], faceRecognitionNameThread);
|
||||
}
|
||||
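Illustrative note (not part of the diff): with the hunk above, the feature extractor can be switched by name between InsightArcFaceRecognition and the SeetaFaceOpenRecognition model added in 2.1.0. A minimal configuration sketch, assuming the `faceRecognition` block is laid out the same way as the `faceDetection` block shown in the application YAML files in this change (the key layout is an assumption, not confirmed by the diff):

```
# hypothetical sketch -- key layout assumed to mirror the faceDetection block
visual:
  model:
    faceRecognition:
      name: SeetaFaceOpenRecognition   # any other value falls back to InsightArcFaceRecognition
      modelPath:
      thread: 1
```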
@ -174,10 +177,9 @@ public class ModelConfig {
     * @return
     */
    private String[] getModelPath(String modelName, String modelPath[]){

        String basePath = "face-search-core/src/main/resources/";
        if("docker".equalsIgnoreCase(profile)){
            basePath = "/app/face-search/";
        if(StringUtils.isNotEmpty(this.baseModelPath)){
            basePath = this.baseModelPath.endsWith("/") ? this.baseModelPath : this.baseModelPath +"/";
        }

        if((null == modelPath || modelPath.length != 3) && "PcnNetworkFaceDetection".equalsIgnoreCase(modelName)){
@ -200,10 +202,18 @@ public class ModelConfig {
            return new String[]{basePath + "model/onnx/recognition_face_arc/glint360k_cosface_r18_fp16_0.1.onnx"};
        }

        if((null == modelPath || modelPath.length != 1) && "SeetaFaceOpenRecognition".equalsIgnoreCase(modelName)){
            return new String[]{basePath + "model/onnx/recognition_face_seeta/face_recognizer_512.onnx"};
        }

        if((null == modelPath || modelPath.length != 1) && "InsightAttributeDetection".equalsIgnoreCase(modelName)){
            return new String[]{basePath + "model/onnx/attribute_gender_age/insight_gender_age.onnx"};
        }

        if((null == modelPath || modelPath.length != 1) && "SeetaMaskFaceKeyPoint".equalsIgnoreCase(modelName)){
            return new String[]{basePath + "model/onnx/keypoint_seeta_mask/landmarker_005_mask_pts5.onnx"};
        }

        return modelPath;
    }
}
@ -22,6 +22,7 @@ logging:
# model configuration
visual:
  model:
    baseModelPath: 'face-search-core/src/main/resources/'
    faceDetection:
      name: InsightScrfdFaceDetection
      modelPath:
@ -83,8 +84,8 @@ spring:
      # primary datasource
      master:
        url: jdbc:mysql://visual-face-search-mysql:3306/visual_face_search?useUnicode=true&characterEncoding=utf8&zeroDateTimeBehavior=convertToNull&useSSL=true&serverTimezone=GMT%2B8
        username: visual
        password: visual
        username: root
        password: root
      slave:
        # secondary datasource switch, disabled by default
        enabled: false
@ -22,6 +22,7 @@ logging:
# model configuration
visual:
  model:
    baseModelPath: ${VISUAL_MODEL_BASE_MODEL_PATH:'/app/face-search/'}
    faceDetection:
      name: ${VISUAL_MODEL_FACEDETECTION_NAME:InsightScrfdFaceDetection}
      modelPath: ${VISUAL_MODEL_FACEDETECTION_PATH:}
@ -22,6 +22,7 @@ logging:
# model configuration
visual:
  model:
    baseModelPath: 'face-search-core/src/main/resources/'
    faceDetection:
      name: InsightScrfdFaceDetection
      modelPath:
@ -48,7 +49,7 @@ visual:
      thread: 1
  engine:
    open-search:
      host: 172.16.36.229
      host: visual-face-search-opensearch
      port: 9200
      scheme: https
      username: admin
@ -82,9 +83,9 @@ spring:
    druid:
      # primary datasource
      master:
        url: jdbc:mysql://172.16.36.228:3306/visual_face_search?useUnicode=true&characterEncoding=utf8&zeroDateTimeBehavior=convertToNull&useSSL=false&serverTimezone=GMT%2B8
        username: visual
        password: visual
        url: jdbc:mysql://visual-face-search-mysql:3306/visual_face_search?useUnicode=true&characterEncoding=utf8&zeroDateTimeBehavior=convertToNull&useSSL=false&serverTimezone=GMT%2B8
        username: root
        password: root
      slave:
        # secondary datasource switch, disabled by default
        enabled: false
@ -6,7 +6,7 @@

    <groupId>com.visual.face.search</groupId>
    <artifactId>face-search-test</artifactId>
    <version>2.0.1</version>
    <version>2.1.0</version>

    <properties>
        <java.version>1.8</java.version>
@ -24,9 +24,28 @@
        <dependency>
            <groupId>org.openpnp</groupId>
            <artifactId>opencv</artifactId>
            <version>4.6.0-0</version>
            <version>4.7.0-0</version>
        </dependency>

    </dependencies>

    <repositories>
        <repository>
            <id>public</id>
            <name>aliyun nexus</name>
            <url>https://maven.aliyun.com/repository/public</url>
            <releases>
                <enabled>true</enabled>
            </releases>
        </repository>
        <repository>
            <id>central</id>
            <name>aliyun nexus</name>
            <url>https://maven.aliyun.com/repository/central</url>
            <releases>
                <enabled>true</enabled>
            </releases>
        </repository>
    </repositories>

</project>
@ -19,9 +19,9 @@ public class FaceSearchExample {
    // local development mode
    public static String serverHost = "http://127.0.0.1:8080";
    // docker deployment mode
    //public static String serverHost = "http://127.0.0.1:56789";
    // public static String serverHost = "http://172.16.24.124:56789";
    // remote test service
    //public static String serverHost = "http://face-search.diven.nat300.top";
    // public static String serverHost = "http://face-search.divenswu.com";
    public static String namespace = "namespace_1";
    public static String collectionName = "collect_20211201_v11";
    public static FaceSearch faceSearch = FaceSearch.build(serverHost, namespace, collectionName);
@ -35,7 +35,16 @@ public class FaceSearchExample {
        List<FiledColumn> faceColumns = new ArrayList<>();
        faceColumns.add(FiledColumn.build().setName("label").setDataType(FiledDataType.STRING).setComment("标签1"));
        // definition of the face collection to be created
        Collect collect = Collect.build().setCollectionComment("人脸库").setSampleColumns(sampleColumns).setFaceColumns(faceColumns);
        Collect collect = Collect.build()
                .setCollectionComment("人脸库")
                // sample attribute columns
                .setSampleColumns(sampleColumns)
                // face attribute columns
                .setFaceColumns(faceColumns)
                // whether to store face and image data
                .setStorageFaceInfo(true)
                // only database storage is implemented so far; implement the StorageImageService interface for other storage back ends
                .setStorageEngine(StorageEngine.CURR_DB);
        // delete the collection
        Response<Boolean> deleteCollect = faceSearch.collect().deleteCollect();
        System.out.println(deleteCollect);
@ -66,9 +75,10 @@ public class FaceSearchExample {
        KeyValues faceData = KeyValues.build();
        faceData.add(KeyValue.build("label", "标签-" + name));
        String imageBase64 = Base64Util.encode(image.getAbsolutePath());
        Face face = Face.build(sampleId).setFaceData(faceData).setImageBase64(imageBase64)
                .setMinConfidenceThresholdWithThisSample(50f)
                .setMaxConfidenceThresholdWithOtherSample(50f);
        Face face = Face.build(sampleId).setFaceData(faceData)
                .setMinConfidenceThresholdWithThisSample(0f)
                .setMaxConfidenceThresholdWithOtherSample(50f)
                .setImageBase64(imageBase64);
        Response<FaceRep> createFace = faceSearch.face().createFace(face);
        System.out.println("createFace:" + createFace);
    }
58
pom.xml
@ -14,7 +14,7 @@
    <groupId>com.visual.face.search</groupId>
    <artifactId>face-search</artifactId>
    <packaging>pom</packaging>
    <version>2.0.1</version>
    <version>2.1.0</version>


    <modules>
@ -27,10 +27,11 @@

    <properties>
        <druid.version>1.1.22</druid.version>
        <opencv.version>4.6.0-0</opencv.version>
        <opensearch.version>2.4.0</opensearch.version>
        <onnxruntime.version>1.13.1</onnxruntime.version>
        <opencv.version>4.7.0-0</opencv.version>
        <opensearch.version>2.8.0</opensearch.version>
        <onnxruntime.version>1.15.1</onnxruntime.version>
        <fastjson.version>1.2.83</fastjson.version>
        <jackson.version>2.15.0</jackson.version>
        <hibernate.version>6.0.13.Final</hibernate.version>
        <commons-math3.version>3.6.1</commons-math3.version>
        <commons-collections4.version>4.1</commons-collections4.version>
@ -146,6 +147,55 @@
            <artifactId>knife4j-spring-boot-starter</artifactId>
            <version>${knife4j-ui.version}</version>
        </dependency>

        <!-- jackson.version -->
        <!-- Note: opensearch.client 2.8 requires jackson 2.15.0 or above, otherwise it fails at runtime. -->
        <!-- If you do not want to upgrade jackson, downgrade opensearch.client to 2.7 or below (see the note after this hunk). -->
        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-core</artifactId>
            <version>${jackson.version}</version>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-annotations</artifactId>
            <version>${jackson.version}</version>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-databind</artifactId>
            <version>${jackson.version}</version>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.dataformat</groupId>
            <artifactId>jackson-dataformat-cbor</artifactId>
            <version>${jackson.version}</version>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.dataformat</groupId>
            <artifactId>jackson-dataformat-smile</artifactId>
            <version>${jackson.version}</version>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.dataformat</groupId>
            <artifactId>jackson-dataformat-yaml</artifactId>
            <version>${jackson.version}</version>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.datatype</groupId>
            <artifactId>jackson-datatype-jdk8</artifactId>
            <version>${jackson.version}</version>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.datatype</groupId>
            <artifactId>jackson-datatype-jsr310</artifactId>
            <version>${jackson.version}</version>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.module</groupId>
            <artifactId>jackson-module-parameter-names</artifactId>
            <version>${jackson.version}</version>
        </dependency>
    </dependencies>
</dependencyManagement>

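Illustrative note (not part of the diff): if upgrading jackson to 2.15.0 is not acceptable, the comment in the hunk above suggests staying on an older OpenSearch client instead. Assuming the client version is driven by the `opensearch.version` property shown in this pom, reverting it to the value the project used before this change would look roughly like:

```
<!-- hypothetical alternative: keep the pre-2.1.0 client so the previous jackson line keeps working -->
<opensearch.version>2.4.0</opensearch.version>
```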
@ -21,7 +21,7 @@ services:

  visual-opensearch:
    container_name: face-search-opensearch-standalone
    image: opensearchproject/opensearch:2.4.0
    image: opensearchproject/opensearch:2.8.0
    environment:
      discovery.type: single-node
    expose:
@ -37,7 +37,7 @@ services:

  visual-opensearch-dashboards:
    container_name: face-search-opensearch-dashboards
    image: opensearchproject/opensearch-dashboards:2.4.0
    image: opensearchproject/opensearch-dashboards:2.8.0
    environment:
      OPENSEARCH_HOSTS: '["https://visual-opensearch:9200"]'
    ports:
@ -48,7 +48,7 @@ services:

  visual-facesearch:
    container_name: face-search-server-standalone
    image: divenswu/face-search:2.0.1
    image: divenswu/face-search:2.1.0
    environment:
      SPRING_DATASOURCE_URL: 'jdbc:mysql://visual-mysql:3306/visual_face_search?useUnicode=true&characterEncoding=utf8'
      SPRING_DATASOURCE_USERNAME: root
@ -1,4 +1,4 @@
version='2.0.1'
version='2.1.0'
SHELL_FOLDER=$(cd "$(dirname "$0")";pwd)
cd ${SHELL_FOLDER}