Question:

OpenCV Error: Image step is wrong (The matrix is not continuous)

陈文景
2023-03-14

When I launch my program from the command line, it fails with the following error: OpenCV Error: Image step is wrong (The matrix is not continuous, thus its number of rows can not be changed) in cv::Mat::reshape, file C:\builds\2_4_packslave-win64-vc12-shared\OpenCV\modules\core\src\matrix.cpp, line 802.

#include "opencv2/core/core.hpp"
#include "opencv2/contrib/contrib.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/objdetect/objdetect.hpp"

#include <iostream>
#include <fstream>
#include <sstream>

using namespace cv;
using namespace std;

static void read_csv(const string& filename, vector<Mat>& images, vector<int>& labels, char separator = ';') {
    std::ifstream file(filename.c_str(), ifstream::in);
    if (!file) {
        string error_message = "No valid input file was given, please check the given filename.";
        CV_Error(CV_StsBadArg, error_message);
    }
    string line, path, classlabel;
    while (getline(file, line)) {
        stringstream liness(line);
        getline(liness, path, separator);
        getline(liness, classlabel);
        if (!path.empty() && !classlabel.empty()) {
            images.push_back(imread(path, 0));
            labels.push_back(atoi(classlabel.c_str()));
        }
    }
}

int main(int argc, const char *argv[]) {
    // Check for valid command line arguments, print usage
    // if no arguments were given.
    if (argc != 4) {
        cout << "usage: " << argv[0] << " </path/to/haar_cascade> </path/to/csv.ext> </path/to/device id>" << endl;
        cout << "\t </path/to/haar_cascade> -- Path to the Haar Cascade for face detection." << endl;
        cout << "\t </path/to/csv.ext> -- Path to the CSV file with the face database." << endl;
        cout << "\t <device id> -- The webcam device id to grab frames from." << endl;
        exit(1);
    }
    // Get the path to your CSV:
    string fn_haar = string(argv[1]);
    string fn_csv = string(argv[2]);
    int deviceId = atoi(argv[3]);
    // These vectors hold the images and corresponding labels:
    vector<Mat> images;
    vector<int> labels;
    // Read in the data (fails if no valid input filename is given, but you'll get an error message):
    try {
        read_csv(fn_csv, images, labels);
    }
    catch (cv::Exception& e) {
        cerr << "Error opening file \"" << fn_csv << "\". Reason: " << e.msg << endl;
        // nothing more we can do
        exit(1);
    }
    // Get the height from the first image. We'll need this
    // later in code to reshape the images to their original
    // size AND we need to reshape incoming faces to this size:
    int im_width = images[0].cols;
    int im_height = images[0].rows;
    // Create a FaceRecognizer and train it on the given images:
    Ptr<FaceRecognizer> model = createFisherFaceRecognizer();
    model->train(images, labels);
    // That's it for learning the Face Recognition model. You now
    // need to create the classifier for the task of Face Detection.
    // We are going to use the haar cascade you have specified in the
    // command line arguments:
    //
    CascadeClassifier haar_cascade;
    haar_cascade.load(fn_haar);
    // Get a handle to the Video device:
    VideoCapture cap(deviceId);
    // Check if we can use this device at all:
    if (!cap.isOpened()) {
        cerr << "Capture Device ID " << deviceId << "cannot be opened." << endl;
        return -1;
    }
    // Holds the current frame from the Video device:
    Mat frame;
    for (;;) {
        cap >> frame;
        // Clone the current frame:
        Mat original = frame.clone();
        // Convert the current frame to grayscale:
        Mat gray;
        cvtColor(original, gray, CV_BGR2GRAY);
        // Find the faces in the frame:
        vector< Rect_<int> > faces;
        haar_cascade.detectMultiScale(gray, faces);
        // At this point you have the position of the faces in
        // faces. Now we'll get the faces, make a prediction and
        // annotate it in the video. Cool or what?
        for (int i = 0; i < faces.size(); i++) {
            // Process face by face:
            Rect face_i = faces[i];
            // Crop the face from the image. So simple with OpenCV C++:
            Mat face = gray(face_i);
            // Resizing the face is necessary for Eigenfaces and Fisherfaces. You can easily
            // verify this, by reading through the face recognition tutorial coming with OpenCV.
            // Resizing IS NOT NEEDED for Local Binary Patterns Histograms, so preparing the
            // input data really depends on the algorithm used.
            //
            // I strongly encourage you to play around with the algorithms. See which work best
            // in your scenario, LBPH should always be a contender for robust face recognition.
            //
            // Since I am showing the Fisherfaces algorithm here, I also show how to resize the
            // face you have just found:
            Mat face_resized;
            cv::resize(face, face_resized, Size(im_width, im_height), 1.0, 1.0, INTER_CUBIC);
            // Now perform the prediction, see how easy that is:
            int prediction = model->predict(face_resized);
            // And finally write all we've found out to the original image!
            // First of all draw a green rectangle around the detected face:
            rectangle(original, face_i, CV_RGB(0, 255, 0), 1);
            // Create the text we will annotate the box with:
            string box_text = format("Prediction = %d", prediction);
            // Calculate the position for annotated text (make sure we don't
            // put illegal values in there):
            int pos_x = std::max(face_i.tl().x - 10, 0);
            int pos_y = std::max(face_i.tl().y - 10, 0);
            // And now put it into the image:
            putText(original, box_text, Point(pos_x, pos_y), FONT_HERSHEY_PLAIN, 1.0, CV_RGB(0, 255, 0), 2.0);
        }
        // Show the result:
        imshow("face_recognizer", original);
        // And display it:
        char key = (char)waitKey(20);
        // Exit this loop on escape:
        if (key == 27)
            break;
    }
    return 0;
}

1 Answer

孙渝
2023-03-14

The FisherFaceRecognizer (and the Eigenfaces one, too) tries to "flatten" each image to a single row (reshape()) for both training and prediction.

That does not work if the Mat is non-continuous (because it is either padded or just a submat/ROI of a larger image).

(Then again, a "file not found" also counts as "non-continuous" ;])
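
To make the cause more concrete, here is a minimal, self-contained sketch (not from the original post) that reproduces the message with a submat/ROI and shows how clone() avoids it:

#include "opencv2/core/core.hpp"
#include <iostream>

int main() {
    // A freshly created image owns its data and is continuous in memory:
    cv::Mat img(100, 100, CV_8UC1, cv::Scalar(0));
    // A ROI shares the parent's buffer but skips bytes between rows,
    // so it is NOT continuous:
    cv::Mat roi = img(cv::Rect(10, 10, 50, 50));
    std::cout << roi.isContinuous() << std::endl;  // prints 0
    // roi.reshape(1, 1); // would throw: "The matrix is not continuous,
    //                    // thus its number of rows can not be changed"
    // clone() copies the pixels into one continuous buffer, so flattening works:
    cv::Mat flat = roi.clone().reshape(1, 1);
    std::cout << flat.cols << std::endl;           // prints 2500
    return 0;
}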

Either way, change this line in the loading code:

images.push_back(imread(path, 0));

to:

Mat m = imread(path, 1);        // load as a 3-channel BGR image
Mat m2;
cvtColor(m, m2, CV_BGR2GRAY);   // convert into a freshly allocated (continuous) grayscale Mat
images.push_back(m2);
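
Since an unreadable path also ends up causing this error (imread() returns an empty Mat, which the recognizer later trips over), it may also help to skip such entries inside read_csv's loop. A sketch of that combined fix, with a hypothetical warning message of my own:

Mat m = imread(path, 1);
if (m.empty()) {
    // "file not found" / unreadable image: skip it instead of training on it
    cerr << "warning: could not read image: " << path << endl;
    continue;
}
Mat m2;
cvtColor(m, m2, CV_BGR2GRAY);   // converted copy is freshly allocated, hence continuous
images.push_back(m2);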