2014 · 17 papers

Fast approximate $l_\infty$ minimization: Speeding up robust regression

F. Shen, C. Shen, R. Hill, A. van den Hengel, Z. Tang

Citation:
F. Shen, C. Shen, R. Hill, A. van den Hengel, Z. Tang. Fast approximate $l_\infty$ minimization: Speeding up robust regression. Computational Statistics and Data Analysis. volume: 77, pages: 25--37. 2014.

    Minimization of the $L_\infty$ norm, which can be viewed as approximately solving the non-convex least median estimation problem, is a powerful method for outlier removal and hence for robust regression. However, current techniques for solving the problem at the heart of $L_\infty$ norm minimization are slow, and therefore cannot scale to large problems. A new method for the minimization of the $L_\infty$ norm is presented here, which provides a speedup of multiple orders of magnitude for data of high dimension. This method, termed Fast $L_\infty$ Minimization, allows robust regression to be applied to a class of problems which were previously inaccessible. It is shown how the $L_\infty$ norm minimization problem can be broken up into smaller sub-problems, which can then be solved extremely efficiently. Experimental results demonstrate the radical reduction in computation time, along with robustness to large numbers of outliers, on several model-fitting problems.
 @article{Shen2014Outlier,
   author    = "F. Shen and  C. Shen and  R. Hill and  A. {van den Hengel} and  Z. Tang",
   title     = "Fast approximate $l_\infty$ minimization: {S}peeding up robust regression",
   journal   = "Computational Statistics and Data Analysis",
   volume    = "77",
   pages     = "25--37",
   year      = "2014",
 }
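The convex core of $L_\infty$ regression can be posed as a linear program: minimize $t$ subject to $-t \le a_i^T x - b_i \le t$ for every row $a_i$ of the design matrix. Below is a minimal sketch of that baseline formulation using SciPy; the function name `linf_regression` is illustrative, and this is the standard LP reduction, not the paper's accelerated decomposition:

```python
import numpy as np
from scipy.optimize import linprog

def linf_regression(A, b):
    """Fit x minimizing ||Ax - b||_inf via the LP:
    min t  s.t.  -t <= a_i^T x - b_i <= t  for all i."""
    m, n = A.shape
    # Decision variables: [x (n entries), t (1 entry)]; objective is t.
    c = np.zeros(n + 1)
    c[-1] = 1.0
    #  a_i^T x - b_i <= t   ->  [ A | -1] [x; t] <=  b
    #  b_i - a_i^T x <= t   ->  [-A | -1] [x; t] <= -b
    G = np.vstack([np.hstack([A, -np.ones((m, 1))]),
                   np.hstack([-A, -np.ones((m, 1))])])
    h = np.concatenate([b, -b])
    res = linprog(c, A_ub=G, b_ub=h, bounds=[(None, None)] * (n + 1))
    return res.x[:n], res.x[-1]  # estimate and attained L_inf residual
```

The paper's contribution is to decompose this problem into much smaller sub-problems; the single LP above is the slow baseline being sped up.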

Multiple kernel learning in the primal for multi-modal Alzheimer's disease classification

F. Liu, L. Zhou, C. Shen, J. Yin

Citation:
F. Liu, L. Zhou, C. Shen, J. Yin. Multiple kernel learning in the primal for multi-modal Alzheimer's disease classification. IEEE Journal of Biomedical and Health Informatics. 2014.
[Published online at IEEE: 10 October 2013]

    To achieve effective and efficient detection of Alzheimer's disease (AD), many machine learning methods have been introduced into this realm. However, limited training samples and heterogeneous feature representations typically make this problem challenging. In this work, we propose a novel multiple kernel learning framework to combine multi-modal features for AD classification, which is scalable and easy to implement. Contrary to the usual way of solving the problem in the dual space, we look at the optimization from a new perspective. By applying a Fourier transform to the Gaussian kernel, we explicitly compute the mapping function, which leads to a more straightforward solution of the problem in the primal space. Furthermore, we impose the mixed $L_{21}$ norm constraint on the kernel weights, known as the group lasso regularization, to enforce group sparsity among different feature modalities. This effectively performs feature modality selection while exploiting complementary information among different kernels, and is therefore able to extract the most discriminative features for classification. Experiments on the ADNI data set demonstrate the effectiveness of the proposed method.
 @article{Liu2014MKL,
   author    = "F. Liu and  L. Zhou and  C. Shen and  J. Yin",
   title     = "Multiple kernel learning in the primal for multi-modal {A}lzheimer's disease classification",
   journal   = "IEEE Journal of Biomedical and Health Informatics",
   url       = "http://dx.doi.org/10.1109/JBHI.2013.2285378",
   year      = "2014",
 }
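The explicit mapping obtained by Fourier-transforming the Gaussian kernel can be sketched with a random Fourier feature map, which approximates $k(x,y)=\exp(-\gamma\|x-y\|^2)$ by an inner product of finite-dimensional features so that learning can proceed in the primal. The function name `gaussian_rff` and its defaults are illustrative assumptions; the paper's exact construction may differ:

```python
import numpy as np

def gaussian_rff(X, n_features=500, gamma=1.0, seed=0):
    """Approximate the Gaussian kernel k(x, y) = exp(-gamma * ||x - y||^2)
    with an explicit map z(x) such that z(x)^T z(y) ~= k(x, y)
    (Bochner's theorem / random Fourier features)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies drawn from the kernel's spectral density N(0, 2*gamma*I).
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    phase = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + phase)
```

With the mapped features in hand, a group-lasso penalty over per-modality feature blocks yields the modality selection described in the abstract.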

Multiple kernel clustering based on centered kernel alignment

Y. Lu, L. Wang, J. Lu, J. Yang, C. Shen

Citation:
Y. Lu, L. Wang, J. Lu, J. Yang, C. Shen. Multiple kernel clustering based on centered kernel alignment. Pattern Recognition. volume: 47, number: 11, pages: 3656--3664. 2014.

 @article{MKL2014,
   author    = "Y. Lu and  L. Wang and  J. Lu and  J. Yang and  C. Shen",
   title     = "Multiple kernel clustering based on centered kernel alignment",
   journal   = "Pattern Recognition",
   volume    = "47",
   number    = "11",
   pages     = "3656--3664",
   year      = "2014",
 }

Large-margin learning of compact binary image encodings

S. Paisitkriangkrai, C. Shen, A. van den Hengel

Citation:
S. Paisitkriangkrai, C. Shen, A. van den Hengel. Large-margin learning of compact binary image encodings. IEEE Transactions on Image Processing. volume: 23, number: 9, pages: 4041--4054. 2014.

 @article{Paul2014TIPb,
   author    = "S. Paisitkriangkrai and  C. Shen and  A. {van den Hengel}",
   title     = "Large-margin learning of compact binary image encodings",
   journal   = "IEEE Transactions on Image Processing",
   volume    = "23",
   number    = "9",
   pages     = "4041--4054",
   year      = "2014",
 }

Efficient semidefinite spectral clustering via Lagrange duality

Y. Yan, C. Shen, H. Wang

Citation:
Y. Yan, C. Shen, H. Wang. Efficient semidefinite spectral clustering via Lagrange duality. IEEE Transactions on Image Processing. volume: 23, number: 8, pages: 3522--3534. 2014.

 @article{Yan2014TIPa,
   author    = "Y. Yan and  C. Shen and  H. Wang",
   title     = "Efficient semidefinite spectral clustering via {L}agrange duality",
   journal   = "IEEE Transactions on Image Processing",
   volume    = "23",
   number    = "8",
   pages     = "3522--3534",
   year      = "2014",
 }

Characterness: An indicator of text in the wild

Y. Li, W. Jia, C. Shen, A. van den Hengel

Citation:
Y. Li, W. Jia, C. Shen, A. van den Hengel. Characterness: An indicator of text in the wild. IEEE Transactions on Image Processing. volume: 23, number: 4, pages: 1666--1677. 2014.

    Text in an image provides vital information for interpreting its contents, and text in a scene can aid a variety of tasks, from navigation to obstacle avoidance and odometry. Despite its value, however, identifying general text in images remains a challenging research problem. Motivated by the need to consider the widely varying forms of natural text, we propose a bottom-up approach to the problem which reflects the 'characterness' of an image region. In this sense our approach mirrors the move from saliency detection methods to measures of 'objectness'. In order to measure characterness we develop three novel cues that are tailored for character detection, and a Bayesian method for their integration. Because text is made up of sets of characters, we then design a Markov random field (MRF) model to exploit the inherent dependencies between characters. We experimentally demonstrate the effectiveness of our characterness cues as well as the advantage of Bayesian multi-cue integration. The proposed text detector outperforms state-of-the-art methods on several benchmark scene text detection datasets. We also show that our measure of 'characterness' is superior to state-of-the-art saliency detection models when applied to the same task.
 @article{Li2014TIP,
   author    = "Y. Li and  W. Jia and  C. Shen and  A. {van den Hengel}",
   title     = "Characterness: {A}n indicator of text in the wild",
   journal   = "IEEE Transactions on Image Processing",
   volume    = "23",
   number    = "4",
   pages     = "1666--1677",
   url       = "http://dx.doi.org/10.1109/TIP.2014.2302896",
   year      = "2014",
 }

Context-aware hypergraph construction for robust spectral clustering

X. Li, W. Hu, C. Shen, A. Dick, Z. Zhang

Citation:
X. Li, W. Hu, C. Shen, A. Dick, Z. Zhang. Context-aware hypergraph construction for robust spectral clustering. IEEE Transactions on Knowledge and Data Engineering. volume: 26, number: 10, pages: 2588--2597. 2014.

 @article{Li2013Hyper,
   author    = "X. Li and  W. Hu and  C. Shen and  A. Dick and  Z. Zhang",
   title     = "Context-aware hypergraph construction for robust spectral clustering",
   journal   = "IEEE Transactions on Knowledge and Data Engineering",
   volume    = "26",
   number    = "10",
   pages     = "2588--2597",
   url       = "http://doi.ieeecomputersociety.org/10.1109/TKDE.2013.126",
   year      = "2014",
 }

Asymmetric pruning for learning cascade detectors

S. Paisitkriangkrai, C. Shen, A. van den Hengel

Citation:
S. Paisitkriangkrai, C. Shen, A. van den Hengel. Asymmetric pruning for learning cascade detectors. IEEE Transactions on Multimedia. volume: 16, number: 5, pages: 1254--1267. 2014.

 @article{Paul2013TMM,
   author    = "S. Paisitkriangkrai and  C. Shen and  A. {van den Hengel}",
   title     = "Asymmetric pruning for learning cascade detectors",
   journal   = "IEEE Transactions on Multimedia",
   volume    = "16",
   number    = "5",
   pages     = "1254--1267",
   url       = "http://dx.doi.org/10.1109/TMM.2014.2308723",
   year      = "2014",
 }

Efficient dual approach to distance metric learning

C. Shen, J. Kim, F. Liu, L. Wang, A. van den Hengel

Citation:
C. Shen, J. Kim, F. Liu, L. Wang, A. van den Hengel. Efficient dual approach to distance metric learning. IEEE Transactions on Neural Networks and Learning Systems. volume: 25, number: 2, pages: 394--406. 2014.

    Distance metric learning is of fundamental interest in machine learning because the distance metric employed can significantly affect the performance of many learning methods. Quadratic Mahalanobis metric learning is a popular approach to the problem, but typically requires solving a semidefinite programming (SDP) problem, which is computationally expensive. Standard interior-point SDP solvers typically have a complexity of $O(D^{6.5})$ (with $D$ the dimension of the input data), and can thus only practically solve problems with fewer than a few thousand variables. Since the number of variables is $D(D+1)/2$, this limits the problems that can practically be solved to around a few hundred dimensions. The complexity of the popular quadratic Mahalanobis metric learning approach thus limits the size of problem to which metric learning can be applied. Here we propose a significantly more efficient approach based on the Lagrange dual formulation of the problem. The proposed formulation is much simpler to implement, and therefore allows much larger Mahalanobis metric learning problems to be solved. The time complexity of the proposed method is $O(D^3)$, which is significantly lower than that of the SDP approach. Experiments on a variety of datasets demonstrate that the proposed method achieves accuracy comparable to the state-of-the-art, but is applicable to significantly larger problems. We also show that the proposed method can be applied to solve more general Frobenius-norm regularized SDP problems approximately.
 @article{Shen2014Metric,
   author    = "C. Shen and  J. Kim and  F. Liu and  L. Wang and  A. {van den Hengel}",
   title     = "Efficient dual approach to distance metric learning",
   journal   = "IEEE Transactions on Neural Networks and Learning Systems",
   volume    = "25",
   number    = "2",
   pages     = "394--406",
   year      = "2014",
 }
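The $O(D^3)$ cost quoted above comes from an eigendecomposition per iteration rather than a full interior-point solve. A minimal sketch of that building block, the Frobenius-norm projection of a symmetric matrix onto the positive semidefinite cone (the helper name `project_psd` is illustrative, not from the paper):

```python
import numpy as np

def project_psd(M):
    """Project a symmetric matrix onto the PSD cone in Frobenius norm
    by clipping negative eigenvalues: the O(D^3) step at the heart of
    eigendecomposition-based dual methods, versus O(D^6.5) for a
    generic interior-point SDP solver."""
    M = 0.5 * (M + M.T)  # symmetrize against numerical asymmetry
    w, V = np.linalg.eigh(M)
    return (V * np.clip(w, 0.0, None)) @ V.T
```

Iterating such projections inside a dual ascent loop keeps the Mahalanobis matrix feasible without ever forming the full SDP.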

A scalable stage-wise approach to large-margin multi-class loss based boosting

S. Paisitkriangkrai, C. Shen, A. van den Hengel

Citation:
S. Paisitkriangkrai, C. Shen, A. van den Hengel. A scalable stage-wise approach to large-margin multi-class loss based boosting. IEEE Transactions on Neural Networks and Learning Systems. volume: 25, number: 5, pages: 1002--1013. 2014.

 @article{Paul2013Fastboosting,
   author    = "S. Paisitkriangkrai and  C. Shen and  A. {van den Hengel}",
   title     = "A scalable stage-wise approach to large-margin multi-class loss based boosting",
   journal   = "IEEE Transactions on Neural Networks and Learning Systems",
   volume    = "25",
   number    = "5",
   pages     = "1002--1013",
   url       = "http://dx.doi.org/10.1109/TNNLS.2013.2282369",
   year      = "2014",
 }

RandomBoost: Simplified multi-class boosting through randomization

S. Paisitkriangkrai, C. Shen, Q. Shi, A. van den Hengel

Citation:
S. Paisitkriangkrai, C. Shen, Q. Shi, A. van den Hengel. RandomBoost: Simplified multi-class boosting through randomization. IEEE Transactions on Neural Networks and Learning Systems. volume: 25, number: 4, pages: 764--779. 2014.

 @article{Paisitkriangkrai2013RandomBoost,
   author    = "S. Paisitkriangkrai and  C. Shen and  Q. Shi and  A. {van den Hengel}",
   title     = "{RandomBoost}: {S}implified multi-class boosting through randomization",
   journal   = "IEEE Transactions on Neural Networks and Learning Systems",
   volume    = "25",
   number    = "4",
   pages     = "764--779",
   url       = "http://dx.doi.org/10.1109/TNNLS.2013.2281214",
   year      = "2014",
 }

StructBoost: Boosting methods for predicting structured output variables

C. Shen, G. Lin, A. van den Hengel

Citation:
C. Shen, G. Lin, A. van den Hengel. StructBoost: Boosting methods for predicting structured output variables. IEEE Transactions on Pattern Analysis and Machine Intelligence. volume: 36, number: 10, pages: 2089--2103. 2014.

    Boosting is a method for learning a single accurate predictor by linearly combining a set of less accurate weak learners. Recently, structured learning has found many applications in computer vision, yet it has not been clear how one can train a boosting model that is directly optimized for predicting multivariate or structured outputs. To bridge this gap, inspired by structured support vector machines (SSVM), we propose a boosting algorithm for structured output prediction, which we refer to as StructBoost. StructBoost supports nonlinear structured learning by combining a set of weak structured learners. As SSVM generalizes SVM, StructBoost generalizes standard boosting approaches such as AdaBoost and LPBoost to structured learning. The resulting optimization problem of StructBoost is more challenging than that of SSVM in the sense that it may involve exponentially many variables and constraints, whereas SSVM usually has only exponentially many constraints, which a cutting-plane method can handle. To solve StructBoost efficiently, we formulate an equivalent 1-slack formulation and solve it using a combination of cutting planes and column generation. We show the versatility and usefulness of StructBoost on a range of problems, such as optimizing the tree loss for hierarchical multi-class classification, optimizing the Pascal overlap criterion for robust visual tracking, and learning conditional random field parameters for image segmentation.
 @article{Shen2014SBoosting,
   author    = "C. Shen and  G. Lin and  A. {van den Hengel}",
   title     = "{StructBoost}: {B}oosting methods for predicting structured output variables",
   journal   = "IEEE Transactions on Pattern Analysis and Machine Intelligence",
   volume    = "36",
   number    = "10",
   pages     = "2089--2103",
   url       = "http://dx.doi.org/10.1109/TPAMI.2014.2315792",
   year      = "2014",
 }

A hierarchical word-merging algorithm with class separability measure

L. Wang, L. Zhou, C. Shen, L. Liu, H. Liu

Citation:
L. Wang, L. Zhou, C. Shen, L. Liu, H. Liu. A hierarchical word-merging algorithm with class separability measure. IEEE Transactions on Pattern Analysis and Machine Intelligence. volume: 36, number: 3, pages: 417--435. 2014.

    In image recognition with the bag-of-features model, a small-sized visual codebook is usually preferred to obtain a low-dimensional histogram representation and high computational efficiency. Such a visual codebook has to be discriminative enough to achieve excellent recognition performance. To create a compact and discriminative codebook, in this paper we propose to merge the visual words in a large-sized initial codebook by maximally preserving class separability. We first show that this results in a difficult optimization problem. To deal with this situation, we devise a suboptimal but very efficient hierarchical word-merging algorithm, which optimally merges two words at each level of the hierarchy. By exploiting the characteristics of the class separability measure and designing a novel indexing structure, the proposed algorithm can hierarchically merge 10,000 visual words down to two words in merely 90 seconds. Also, to show the properties of the proposed algorithm and reveal its advantages, we conduct detailed theoretical analysis to compare it with another hierarchical word-merging algorithm that maximally preserves mutual information, obtaining interesting findings. Experimental studies are conducted to verify the effectiveness of the proposed algorithm on multiple benchmark data sets. As shown, it can efficiently produce more compact and discriminative codebooks than the state-of-the-art hierarchical word-merging algorithms, especially when the size of the codebook is significantly reduced.
 @article{Wang2014PAMI,
   author    = "L. Wang and  L. Zhou and  C. Shen and  L. Liu and  H. Liu",
   title     = "A hierarchical word-merging algorithm with class separability measure",
   journal   = "IEEE Transactions on Pattern Analysis and Machine Intelligence",
   volume    = "36",
   number    = "3",
   pages     = "417--435",
   year      = "2014",
 }

Fast supervised hashing with decision trees for high-dimensional data

G. Lin, C. Shen, Q. Shi, A. van den Hengel, D. Suter

Citation:
G. Lin, C. Shen, Q. Shi, A. van den Hengel, D. Suter. Fast supervised hashing with decision trees for high-dimensional data. IEEE Conference on Computer Vision and Pattern Recognition (CVPR'14). 2014.

    Supervised hashing aims to map the original features to compact binary codes that are able to preserve label based similarity in the Hamming space. Non-linear hash functions have demonstrated the advantage over linear ones due to their powerful generalization capability. In the literature, kernel functions are typically used to achieve non-linearity in hashing, which achieve encouraging retrieval performance at the price of slow evaluation and training time. Here we propose to use boosted decision trees for achieving non-linearity in hashing, which are fast to train and evaluate, hence more suitable for hashing with high dimensional data. In our approach, we first propose sub-modular formulations for the hashing binary code inference problem and an efficient GraphCut based block search method for solving large-scale inference. Then we learn hash functions by training boosted decision trees to fit the binary codes. Experiments demonstrate that our proposed method significantly outperforms most state-of-the-art methods in retrieval precision and training time. Especially for high-dimensional data, our method is orders of magnitude faster than many methods in terms of training time.
 @inproceedings{CVPR14Lin,
   author    = "G. Lin and  C. Shen and  Q. Shi and  A. {van den Hengel} and  D. Suter",
   title     = "Fast supervised hashing with decision trees for high-dimensional data",
   booktitle = "IEEE Conference on Computer Vision and Pattern Recognition (CVPR'14)",
   address   = "Columbus, Ohio, USA",
   url       = "https://bitbucket.org/chhshen/fasthash/src",
   year      = "2014",
 }
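Once codes are learned, retrieval reduces to Hamming distances between packed bit vectors, which is what makes compact binary codes attractive at scale. A minimal sketch of that lookup step with NumPy; the function name `hamming_distances` is illustrative and independent of the paper's training procedure:

```python
import numpy as np

def hamming_distances(query_code, db_codes):
    """Hamming distance between one binary code (0/1 array) and a
    database of codes, with bits packed into uint8 words (XOR + popcount)."""
    q = np.packbits(query_code)
    db = np.packbits(db_codes, axis=1)
    xor = np.bitwise_xor(db, q)  # differing bits, one byte at a time
    # Popcount via a lookup table over all 256 possible byte values.
    popcount = np.array([bin(i).count("1") for i in range(256)], dtype=np.uint8)
    return popcount[xor].sum(axis=1)
```

Scanning millions of 64-bit codes this way is memory-bandwidth bound, which is why Hamming-space retrieval scales where exact nearest-neighbour search in the original feature space does not.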

Optimizing ranking measures for compact binary code learning

G. Lin, C. Shen, J. Wu

Citation:
G. Lin, C. Shen, J. Wu. Optimizing ranking measures for compact binary code learning. European Conference on Computer Vision (ECCV'14). 2014.

 @inproceedings{ECCV14Lin,
   author    = "G. Lin and  C. Shen and  J. Wu",
   title     = "Optimizing ranking measures for compact binary code learning",
   booktitle = "European Conference on Computer Vision (ECCV'14)",
   address   = "Zurich",
   year      = "2014",
 }

Strengthening the effectiveness of pedestrian detection with spatially pooled features

S. Paisitkriangkrai, C. Shen, A. van den Hengel

Citation:
S. Paisitkriangkrai, C. Shen, A. van den Hengel. Strengthening the effectiveness of pedestrian detection with spatially pooled features. European Conference on Computer Vision (ECCV'14). 2014.

 @inproceedings{ECCV14Paul,
   author    = "S. Paisitkriangkrai and  C. Shen and  A. {van den Hengel}",
   title     = "Strengthening the effectiveness of pedestrian detection with spatially pooled features",
   booktitle = "European Conference on Computer Vision (ECCV'14)",
   address   = "Zurich",
   year      = "2014",
 }

Encoding high dimensional local features by sparse coding based Fisher vectors

L. Liu, C. Shen, L. Wang, A. van den Hengel, C. Wang

Citation:
L. Liu, C. Shen, L. Wang, A. van den Hengel, C. Wang. Encoding high dimensional local features by sparse coding based Fisher vectors. Advances in Neural Information Processing Systems (NIPS'14). 2014.

 @inproceedings{Liu2014Fisher,
   author    = "L. Liu and  C. Shen and  L. Wang and  A. {van den Hengel} and  C. Wang",
   title     = "Encoding high dimensional local features by sparse coding based {F}isher vectors",
   booktitle = "Advances in Neural Information Processing Systems (NIPS'14)",
   address   = "Montreal, Canada",
   year      = "2014",
 }