Posted by 한효정


Discriminatively Trained Deformable Part Models

Version 4. Updated on April 21, 2010.

Over the past few years we have developed a complete learning-based system for detecting and localizing objects in images. Our system represents objects using mixtures of deformable part models. These models are trained with a discriminative method that requires only bounding boxes for the objects in an image. The approach leads to efficient object detectors that achieve state-of-the-art results on the PASCAL and INRIA person datasets.

At a high level, our system can be characterized by the combination of:
1) Strong low-level features based on histograms of oriented gradients (HOG).
2) Efficient matching algorithms for deformable part-based models (pictorial structures).
3) Discriminative learning with latent variables (latent SVM).
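As an illustration of component (1), here is a minimal HOG-style feature computation: per-cell orientation histograms weighted by gradient magnitude. This is a simplified sketch for exposition only; the released system uses a richer HOG variant implemented in Matlab/C.

```python
import numpy as np

def hog_features(image, cell_size=8, n_bins=9):
    """Simplified HOG: per-cell histograms of gradient orientation,
    weighted by gradient magnitude, with per-cell L2 normalization.
    Illustrative only; the release uses a more elaborate variant."""
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    orientation = np.mod(np.arctan2(gy, gx), np.pi)   # unsigned, in [0, pi)
    cells_y, cells_x = image.shape[0] // cell_size, image.shape[1] // cell_size
    hist = np.zeros((cells_y, cells_x, n_bins))
    bins = np.minimum((orientation / np.pi * n_bins).astype(int), n_bins - 1)
    for i in range(cells_y * cell_size):
        for j in range(cells_x * cell_size):
            hist[i // cell_size, j // cell_size, bins[i, j]] += magnitude[i, j]
    # Normalize each cell's histogram (simplified block normalization)
    return hist / (np.linalg.norm(hist, axis=2, keepdims=True) + 1e-6)

feats = hog_features(np.random.rand(32, 32))
print(feats.shape)  # (4, 4, 9)
```

A dense grid of such cells, computed at every level of an image pyramid, is the feature map on which the root and part filters are scored.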

PASCAL VOC "Lifetime Achievement" Prize

Here you can download a complete implementation of our system. The current implementation extends the system in [2] as described in [3]. The models in this implementation are structured using the grammar formalism presented in [4]. Previous releases are available below. 

The distribution contains object detection and model learning code, as well as models trained on the PASCAL and INRIA Person datasets. This release also includes code for rescoring detections based on contextual information.

Also available (as a separate package) is the source code for a cascade version of the object detection system, which is described in [5].

The system is implemented in Matlab, with a few helper functions written in C/C++ for efficiency reasons. The software was tested on several versions of Linux and Mac OS X using Matlab versions R2009b and R2010a. There may be compatibility issues with other versions of Matlab.

For questions regarding the source code, please contact Ross Girshick.

Source code and model download: voc-release4.tgz (updated on 04/21/10).
Warning: does not work with Matlab 2010b (see compile.m).
Cascade detection code: here

This project has been supported by the National Science Foundation under Grant Nos. 0534820, 0746569, and 0811340.


Slides from a presentation given at the 2009 Chicago Machine Learning Summer School and Workshop (pdf).

[1] P. Felzenszwalb, D. McAllester, D. Ramanan.
A Discriminatively Trained, Multiscale, Deformable Part Model.
Proceedings of the IEEE CVPR 2008.

[2] P. Felzenszwalb, R. Girshick, D. McAllester, D. Ramanan.
Object Detection with Discriminatively Trained Part Based Models.
IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 32, No. 9, September 2010.

[3] R. Girshick, P. Felzenszwalb, D. McAllester.
release4-notes.pdf -- also included in the download.

[4] P. Felzenszwalb, D. McAllester.
Object Detection Grammars.
University of Chicago, Computer Science TR-2010-02, February 2010.

[5] P. Felzenszwalb, R. Girshick, D. McAllester.
Cascade Object Detection with Deformable Part Models.
Proceedings of the IEEE CVPR 2010. 

How to cite

When citing our system, please cite reference [2] and the website for the specific release. The website bibtex reference is below.

@misc{voc-release4,
 author = "Felzenszwalb, P. F. and Girshick, R. B. and McAllester, D.",
 title = "Discriminatively Trained Deformable Part Models, Release 4",
 howpublished = ""}

Example detections


Detection results — PASCAL datasets

The models included with the source code were trained on the train+val dataset from each year and evaluated on the corresponding test dataset. 
This is exactly the protocol of the "comp3" competition. Below are the average precision scores we obtain in each category.

Table 1. PASCAL VOC 2009 comp3 (average precision per class; final column is the mean)

                 aero  bike  bird  boat  bottle bus   car   cat   chair cow   table dog   horse mbike person plant sheep sofa  train tv    mean
without context  39.5  48.2  11.4  12.3  28.6   42.3  40.4  25.0  17.4  20.5  15.3  14.5  42.1  44.4  41.9   12.7  24.3  16.5  43.3  32.2  28.6
with context     43.6  50.8  15.1  14.1  30.2   45.6  41.8  27.3  18.9  22.1  15.8  18.2  45.7  47.3  43.8   14.3  26.4  18.2  46.8  33.7  31.0

Table 2. PASCAL VOC 2007 comp3 (average precision per class; final column is the mean)

                 aero  bike  bird  boat  bottle bus   car   cat   chair cow   table dog   horse mbike person plant sheep sofa  train tv    mean
without context  28.9  59.5  10.0  15.2  25.5   49.6  57.9  19.3  22.4  25.2  23.3  11.1  56.8  48.7  41.9   12.2  17.8  33.6  45.1  41.6  32.3
with context     31.2  61.5  11.9  17.4  27.0   49.1  59.6  23.1  23.0  26.3  24.9  12.9  60.1  51.0  43.2   13.4  18.8  36.2  49.1  43.0  34.1

Table 3. PASCAL VOC 2006 comp3 (average precision per class; final column is the mean)

                 bike  bus   car   cat   cow   dog   horse mbike person sheep mean
without context  67.1  65.8  70.7  26.8  47.7  15.8  48.3  66.0  41.0   45.6  49.5
with context     69.2  67.6  71.5  29.0  51.4  19.4  54.0  70.0  44.3   47.4  52.4
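The AP scores above follow the PASCAL protocol: a detection counts as correct when its intersection-over-union overlap with a ground-truth box exceeds 0.5, and average precision summarizes the resulting precision/recall curve. A minimal sketch of that scoring (illustrative Python; the official VOC development kit is the authoritative scorer):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def average_precision(detections, n_ground_truth):
    """detections: (confidence, is_correct) pairs over the whole test set;
    a detection is correct when its IoU with an unmatched ground-truth
    box exceeds 0.5. Returns the area under the precision/recall curve."""
    detections = sorted(detections, key=lambda d: -d[0])
    tp = fp = 0
    ap = prev_recall = 0.0
    for _, correct in detections:
        tp, fp = (tp + 1, fp) if correct else (tp, fp + 1)
        precision = tp / (tp + fp)
        recall = tp / n_ground_truth
        ap += precision * (recall - prev_recall)
        prev_recall = recall
    return ap
```

Note that the 2007+ challenges interpolated the precision/recall curve before integrating; the uninterpolated area above conveys the same idea.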

Detection Results — INRIA Person

We also trained and tested a model on the INRIA Person dataset.
We scored the model using the PASCAL evaluation methodology on the complete test dataset, including images without people.

INRIA Person average precision: 88.2

Plot of Recall / False positives per image (FPPI):

Previous Releases



2011. 7. 17. 09:04 Aphorism/Diary

용눈이 오름 (Yongnuni Oreum)

Posted from an iPhone.


Methods and systems for automated identification of celebrity face images are provided that generate a name list of prominent celebrities, obtain a set of images and corresponding feature vectors for each name, detect faces within the set of images, and remove non-face images. An analysis of the...
Inventors: David ROSS, Andrew RABINOVICH, Anand PILLAI, Hartwig ADAM
Assignee: Google Inc.
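The steps the abstract lists can be sketched as a small pipeline. Everything below is hypothetical: the function names and signatures are illustrative stand-ins, since the patent text does not specify an API.

```python
def build_celebrity_face_index(names, search_images, detect_faces, embed):
    """Hypothetical sketch of the pipeline in the abstract: for each name
    on the celebrity list, gather candidate images, drop images in which
    no face is detected, and keep one feature vector per detected face.
    search_images / detect_faces / embed stand in for components the
    abstract leaves unspecified."""
    index = {}
    for name in names:
        vectors = []
        for image in search_images(name):
            faces = detect_faces(image)
            if not faces:          # remove non-face images
                continue
            vectors.extend(embed(face) for face in faces)
        index[name] = vectors
    return index
```

The truncated sentence suggests a subsequent analysis step over these per-name vector sets, which the excerpt does not describe.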


Scalable Face Image Retrieval with Identity-Based Quantization and Multi-Reference Re-ranking
Zhong Wu (Tsinghua Univ., Center for Advanced Study), Qifa Ke (Microsoft Research, Silicon Valley Lab), Jian Sun (Microsoft Research, Asia Lab), Heung-Yeung Shum

A simple object detector with boosting

ICCV 2005 short courses on 
Recognizing and Learning Object Categories 

Boosting provides a simple framework for developing robust object detection algorithms. This set of functions provides a minimal toolkit for building an object detection algorithm. It is written entirely in Matlab to make it easily accessible as a teaching tool; therefore, it is not appropriate for building real-time applications.


Download the code and datasets
Download the LabelMe toolbox

Unzip both files, then modify the paths in initpath.m.
Modify the folder paths in parameters.m to point to the locations of the images and annotations.

Description of the functions 

initpath.m - Initializes the Matlab path. You should run this command when you start the Matlab session.
parameters.m - Contains parameters to configure the classifiers and the database.

Boosting tools
demoGentleBoost.m - simple demo of gentleBoost using stumps on two dimensions

createDatabases.m - creates the training and test database using the LabelMe database.
createDictionary.m - creates a dictionary of filtered patches from the target object.
computeFeatures.m - precomputes the features of all images and stores the feature outputs on the center of the target object and on a sparse set of locations from the background.
trainDetector.m - trains the boosted detector from the precomputed features
runDetector.m - runs the detector on test images

Features and weak detectors
convCrossConv.m - Weak detector: computes template matching with a localized patch in object centered coordinates.

singleScaleBoostedDetector.m - runs the strong classifier on an image at a single scale and outputs bounding boxes and scores.
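The kind of feature convCrossConv.m computes can be sketched as the normalized correlation of a small template evaluated at an offset relative to the object center, i.e., in object-centered coordinates. This is an illustrative Python stand-in, not a port of the Matlab function:

```python
import numpy as np

def patch_response(image, template, center, offset):
    """Normalized correlation of a template with an image window placed
    at (center + offset), a location in object-centered coordinates.
    A hypothetical stand-in for the feature convCrossConv.m computes."""
    ph, pw = template.shape
    y0 = center[0] + offset[0] - ph // 2
    x0 = center[1] + offset[1] - pw // 2
    if y0 < 0 or x0 < 0 or y0 + ph > image.shape[0] or x0 + pw > image.shape[1]:
        return 0.0                      # offset falls outside the image
    window = image[y0:y0 + ph, x0:x0 + pw]
    # Zero-mean normalized correlation as the weak-detector feature
    w = window - window.mean()
    t = template - template.mean()
    denom = np.linalg.norm(w) * np.linalg.norm(t)
    return float((w * t).sum() / denom) if denom > 0 else 0.0
```

A boosted detector then thresholds many such responses, one per dictionary patch and offset, as its weak classifiers.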

LabelMe toolbox 
LabelMe - Describes the utility functions used to manipulate the database


First, run initpath.m and modify the folder paths in the script parameters.m.

Then run the boosting demo demoGentleBoost.m.

This demo will first ask for a set of points in 2D to be used as training data (left button = class +1, right button = class -1). The classifier can only perform simple discrimination tasks because it uses stumps as weak classifiers (i.e., decision boundaries parallel to the axes). If you instead allow weak classifiers that are lines of arbitrary orientation, you can easily obtain more interesting boundaries. However, stumps are frequently used in object detection because they support efficient feature selection. This demo will show you the limits of stumps. In object detection, some of these limitations are compensated for by using a very large number of features.
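The gentle boosting with stump weak learners used by the demo can be sketched as follows (illustrative Python rather than the Matlab demo; each stump is a weighted least-squares fit that thresholds a single coordinate):

```python
import numpy as np

def fit_stump(X, y, w):
    """Weighted least-squares regression stump f(x) = a*[x_d > thr] + b,
    searching over all dimensions d and all candidate thresholds thr."""
    best = None
    for d in range(X.shape[1]):
        for thr in np.unique(X[:, d]):
            mask = X[:, d] > thr
            wa, wb = w[mask].sum(), w[~mask].sum()
            b = (w[~mask] * y[~mask]).sum() / wb if wb > 0 else 0.0
            a = ((w[mask] * y[mask]).sum() / wa if wa > 0 else 0.0) - b
            err = (w * (y - (a * mask + b)) ** 2).sum()
            if best is None or err < best[0]:
                best = (err, d, thr, a, b)
    return best[1:]

def gentle_boost(X, y, rounds=20):
    """Gentle boosting: labels y in {-1, +1}; returns a list of stumps."""
    w = np.full(len(y), 1.0 / len(y))
    stumps = []
    for _ in range(rounds):
        d, thr, a, b = fit_stump(X, y, w)
        pred = a * (X[:, d] > thr) + b
        w = w * np.exp(-y * pred)       # reweight toward current mistakes
        w /= w.sum()
        stumps.append((d, thr, a, b))
    return stumps

def boost_score(stumps, X):
    """Additive classifier output; the predicted class is its sign."""
    return sum(a * (X[:, d] > thr) + b for d, thr, a, b in stumps)
```

Because each stump thresholds one coordinate, the combined boundary is piecewise axis-aligned, which is exactly the limitation the demo makes visible.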
A look at the database
This is a sample of the images used for this demo. They contain cars (side views) and screens (frontal views) at normalized scale, and are a small subset of the LabelMe dataset. The program createDatabases.m shows how the database used for this demo was created.
If you download the full database, the first thing to do is update the folder paths in parameters.m. Then run createDatabases.m, which reads all the annotation files and creates a struct used later by the query tools. For more information about how the query tools work, see the LabelMe Toolbox.

Running the detector
Before training your own detector, you can try the script runDetector.m; if everything is set up correctly, it runs end to end on the test images.
Here is an example of the output of the detector when trained to detect side views of cars:

Training a new detector
To train a new detector, you first need to collect a new set of images. If you use the full LabelMe database, you only need to change the object name in parameters.m to indicate the object category you want to detect. Also in parameters.m, you can change training parameters such as the number of training images, the size of the patches, the scale of the object, and the number of negative examples.

createDictionary.m will create the vocabulary of patches used to compute the features.

computeFeatures.m will precompute all the features for the training images.

trainDetector.m will train the detector using Gentle Boosting [1].

Each of these programs adds information to the 'data' struct, which holds the precomputed features, the list of images used for training, the dictionary of features, and the parameters of the classifier.

Finally, with runDetector.m you can run the new detector.

Multiscale detector

In order to build a multiscale detector, you need to loop over scales, running the single-scale detector and then shrinking the image. Something like this:
scalingStep = 0.8;
for scale = 1:Nscales
   % detect at the current scale, then shrink the image for the next scale
   [Score{scale}, boundingBox{scale}, boxScores{scale}] = singleScaleBoostedDetector(img, data);
   img = imresize(img, scalingStep, 'bilinear');
end

[1] Friedman, J. H., Hastie, T. and Tibshirani, R., "Additive Logistic Regression: a Statistical View of Boosting." (Aug. 1998) 

[2] A. Torralba, K. P. Murphy and W. T. Freeman. (2004). "Sharing features: efficient boosting procedures for multiclass object detection". Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR). pp. 762-769.