At each of the 576 pixel locations, a histogram accumulates the sample weights that fall into each of its bins, separately for positive and negative samples. This produces 576 histograms for the positive samples and 576 for the negative samples, each histogram having 511 bins. To design a weak classifier for a given pixel location, the positive and negative histograms are compared bin by bin. The classification decision for each bin is made in favor of the side with the larger accumulated weight, and the classification error rate for that bin is the accumulated weight of the side with the smaller weight. For example, if the accumulated weights of the 35th bin at the 1st pixel location are 0.005 from the positive samples and 0.001 from the negative samples, the classification decision for that LPR feature value at that pixel location is positive, and the corresponding error rate is 0.001. The total error rate of a pixel location is obtained by summing its bin error rates, as shown in Algorithm 1(2)(b).

Figure 4: Selection of the weak classifier for each round in AdaBoost learning. At first, LPR images are extracted from the training images. The histogram of LPR is generated by accumulating the weights of samples according to the LPR feature values at the same pixel location. … The best pixel location with the smallest error rate is chosen as the weak classifier for each round in AdaBoost learning.
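To make the procedure in Figure 4 concrete, the following sketch (illustrative Python; the function names, the flattened 24 × 24 window giving 576 locations, and the mapping of LPR values onto 511 bin indices are assumptions, not code from the paper) accumulates the weighted histograms and computes each location's total error rate:

```python
import numpy as np

N_LOCATIONS = 576   # assumed: a 24 x 24 detection window, flattened
N_BINS = 511        # assumed: LPR feature values mapped to bin indices 0..510

def build_histograms(features, labels, weights):
    """Accumulate AdaBoost sample weights into per-location histograms.

    features: (n_samples, N_LOCATIONS) int array of bin indices in [0, N_BINS)
    labels:   (n_samples,) array of +1 (positive) or -1 (negative)
    weights:  (n_samples,) current AdaBoost sample weights
    """
    h_pos = np.zeros((N_LOCATIONS, N_BINS))
    h_neg = np.zeros((N_LOCATIONS, N_BINS))
    locs = np.arange(N_LOCATIONS)
    for x, y, w in zip(features, labels, weights):
        target = h_pos if y == 1 else h_neg
        target[locs, x] += w   # each location's hit bin receives weight w
    return h_pos, h_neg

def total_error_rates(h_pos, h_neg):
    """For each bin, the smaller accumulated weight is misclassified;
    a location's total error rate is the sum over its bins."""
    return np.minimum(h_pos, h_neg).sum(axis=1)
```

The pixel location minimizing `total_error_rates(h_pos, h_neg)` then becomes this round's weak classifier.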
However, if the number of pixel locations used to combine the strong classifier is limited to n and all n pixel locations have already been selected in previous rounds, the best pixel location must instead be chosen by comparing the error rates among the locations selected in those previous rounds. In this way, no matter how many rounds are run, the number of weak classifiers combined into the strong classifier stays fixed while the performance of the strong classifier continues to improve. Taking $E_w[y \mid x]$ as the output of the best selected weak classifier, the lookup table for the weak classifier can be formulated as follows.
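A minimal sketch of this capped selection rule, reusing the (assumed) per-location error-rate array from the previous snippet:

```python
import numpy as np

def select_location(err, selected, n_max):
    """Choose this round's pixel location under the n-location cap.

    err:      (N_LOCATIONS,) total error rates under the current weights
    selected: set of pixel locations chosen in previous rounds
    n_max:    maximum number of distinct locations in the strong classifier
    """
    if len(selected) < n_max:
        best = int(np.argmin(err))                      # search all locations
    else:
        best = min(selected, key=lambda loc: err[loc])  # reuse a past location
    selected.add(best)
    return best
```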
Since $y$ is a binary label of either $+1$ or $-1$, $E_w[y \mid x]$ can be expressed as

$$
E_w[y \mid x] = P_w(y=1 \mid x) - P_w(y=-1 \mid x)
= \frac{P_w(x \mid y=1)\,P(y=1)}{P_w(x)} - \frac{P_w(x \mid y=-1)\,P(y=-1)}{P_w(x)}
= \frac{P_w(x \mid y=1)\,P(y=1) - P_w(x \mid y=-1)\,P(y=-1)}{P_w(x \mid y=1)\,P(y=1) + P_w(x \mid y=-1)\,P(y=-1)},
\tag{7}
$$

where $x$ is an input vector, $y$ is the desired label, and $P_w(x \mid y)$ is the probability of $x$ given $y$. We define $g(x, \theta)$ as the value of the $\theta$th bin of the histogram at pixel location $x$, and $P_{\mathrm{pos}}$ and $P_{\mathrm{neg}}$ as the ratios of the sums of positive and negative weights, respectively, to the sum of all weights: $P_{\mathrm{pos}} = \sum_{j \in \mathrm{pos}} w_j^t / \sum_i w_i^t$ and $P_{\mathrm{neg}} = \sum_{j \in \mathrm{neg}} w_j^t / \sum_i w_i^t$.
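As a sketch of the resulting lookup table (again assuming the weighted histograms from the first snippet): since $P_w(x \mid y=1)\,P(y=1)$ is just the accumulated positive weight of a bin divided by the total weight (and likewise for negatives), Equation (7) reduces per bin to a normalized weight difference. The epsilon guard is an implementation detail, not from the paper.

```python
def weak_classifier_lut(h_pos_loc, h_neg_loc, eps=1e-12):
    """Per-bin lookup table h(b) = E_w[y | x falls in bin b], Equation (7).

    Because P_w(x | y=1) P(y=1) equals the accumulated positive weight of
    bin b over the total weight (similarly for negatives), Equation (7)
    reduces to (h_pos[b] - h_neg[b]) / (h_pos[b] + h_neg[b]).

    h_pos_loc, h_neg_loc: (N_BINS,) weighted histograms at one pixel location
    eps: guards empty bins against division by zero (implementation detail)
    """
    return (h_pos_loc - h_neg_loc) / (h_pos_loc + h_neg_loc + eps)
```

Each entry lies in $(-1, 1)$: its sign gives the bin's classification decision and its magnitude the confidence, so in the usual real-valued AdaBoost formulation the strong classifier takes the sign of the lookup values summed over the selected pixel locations.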