Thursday, June 11, 2015

Scattering tomography with path integral
Toru Tamaki, Hiroshima University, Japan
Abstract: Tomography, an inverse problem that recovers the interior of an object by probing it with light and observing the output, is an important issue in physics, medical imaging, computer vision and related research fields. In this talk, we will present an approach to optical scattering tomography that uses the path integral, originally developed for volume rendering, to model light transport. We make assumptions on a specific scattering model to facilitate the computation, then formulate an inverse problem to be solved by an interior point method. We will show some simulation results and comparisons with diffusion optical tomography (DOT), and conclude with future directions.

Wednesday, May 13, 2015

Understanding deep features with computer-generated imagery
Mathieu Aubry, LIGM/ENPC, currently at UC Berkeley
Abstract: We introduce an approach for analyzing the variation of features generated by convolutional neural networks (CNNs) trained on large image datasets with respect to scene factors that occur in natural images. Such factors may include object style, 3D viewpoint, color, and scene lighting configuration. Our approach analyzes CNN feature responses with respect to different scene factors by controlling for them via rendering using a large database of 3D CAD models. The rendered images are presented to a trained CNN and responses for different layers are studied with respect to the input scene factors. We perform a linear decomposition of the responses based on knowledge of the input scene factors and analyze the resulting components. In particular, we quantify their relative importance in the CNN responses and visualize them using principal component analysis. We show qualitative and quantitative results of our study on three trained CNNs: AlexNet, Places, and Oxford VGG. We observe important differences across the different networks and CNN layers with respect to different scene factors and object categories. Finally, we demonstrate that our analysis based on computer-generated imagery translates to the network representation of natural images.

Tuesday, May 12, 2015

Efficient Incremental Computation of Families of Component Trees
Ronaldo Fumio Hashimoto, University of São Paulo, Brazil
Abstract: The component tree allows an image to be represented as a hierarchy of connected components. These components are directly related to the neighborhood chosen to obtain them; in particular, a family of component trees built with increasing neighborhoods allows nodes of different trees to be linked according to their inclusion relation, adding a sense of scale as we travel along them. In this talk, we present a class of neighborhoods and show that it is suited to the construction of a family of trees. We then provide an algorithm that benefits from the properties of this class, reusing previous computations in order to construct the entire family of component trees efficiently.
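As a toy illustration of the inclusion relation underlying component trees (our sketch, not the speaker's algorithm, which varies the neighborhood rather than the threshold), the snippet below lists the connected components of the upper-level sets of a 1D signal: as the threshold decreases, components merge, and the inclusion of components across levels is exactly what the tree encodes.

```python
def components(level_set):
    """Maximal runs of True in a 1D boolean mask, i.e. its connected
    components, returned as (start, end) index pairs."""
    comps, start = [], None
    for i, v in enumerate(level_set + [False]):
        if v and start is None:
            start = i
        elif not v and start is not None:
            comps.append((start, i - 1))
            start = None
    return comps

signal = [1, 3, 2, 0, 4, 4, 1]
# Upper-level sets for decreasing thresholds: each component at a high
# threshold is included in a component at every lower threshold.
for t in (4, 3, 2, 1):
    mask = [x >= t for x in signal]
    print(t, components(mask))
```

Running this shows the single peak at threshold 4 splitting the signal, then components growing and merging as the threshold drops; the parent of a node in the tree is the component containing it at the next lower level.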

Thursday, May 7, 2015

Labeling the world
Bryan C. Russell, Adobe Research
Abstract: Much of our 3D visual world is described in auxiliary data, such as online text and maps. In this talk I will describe two works that reason about such auxiliary data, together with reconstructed 3D geometry.
In the first part I'll describe an approach for analyzing Wikipedia and other text, together with online photos, to produce annotated 3D models of famous tourist sites. The approach is completely automated, and leverages online text and photo co-occurrences via Google Image Search. It enables a number of interactions, which we demonstrate in a novel 3D visualization tool. Text can be selected to move the camera to the corresponding objects, 3D bounding boxes provide anchors back to the text describing them, and the overall narrative of the text provides a temporal guide for automatically flying through the scene to visualize the world as you read about it. We show compelling results on several major tourist sites.
In the second part I'll describe an approach for analyzing annotated maps of a site, together with Internet photos, to reconstruct large indoor spaces of famous tourist sites. While current 3D reconstruction algorithms often produce a set of disconnected components (3D pieces) for indoor scenes due to scene coverage or matching failures, we make use of a provided map to lay out the 3D pieces in a global coordinate system. Our approach leverages position, orientation, and shape cues extracted from the map and 3D pieces and optimizes a global objective to recover the global layout of the pieces. We introduce a novel crowd flow cue that measures how people move across the site to recover 3D geometry orientation. We show compelling results on major tourist sites.
Bio: Bryan Russell is a Research Scientist in the Creative Technologies Lab at Adobe Research in San Francisco. He received his Ph.D. from MIT in the Computer Science and Artificial Intelligence Laboratory in 2008 under the supervision of Professors William T. Freeman and Antonio Torralba. He was a postdoctoral fellow from 2008 to 2010 in the INRIA Willow team at the Département d'Informatique of Ecole Normale Supérieure in Paris, France. He was a research scientist with Intel Labs from 2012 to 2014 as part of the Intel Science and Technology Center for Visual Computing (ISTC-VC) and has been affiliated with the University of Washington since 2011.

Wednesday, April 15, 2015

Stable radial distortion calibration by polynomial matrix inequalities programming
Tomas Pajdla, Czech Technical University in Prague
Abstract: Polynomial and rational functions are the number one choice for modeling the radial distortion of lenses. However, several extrapolation and numerical issues may arise when using these functions, and they have so far received little attention in the literature. In this talk, we identify these problems and show how to deal with them by enforcing the non-negativity of certain polynomials. Further, we show how to model these non-negativities using polynomial matrix inequalities (PMI) and how to estimate the radial distortion parameters subject to PMI constraints using semidefinite programming (SDP). Finally, we suggest several ways to incorporate the proposed method into the overall camera calibration procedure.
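To see the extrapolation issue the abstract alludes to, here is a minimal sketch (the coefficients are made up, and this is not the proposed PMI/SDP method): a standard polynomial distortion model can lose monotonicity outside the calibrated range, precisely the behavior that enforcing non-negativity of its derivative polynomial rules out.

```python
# Standard polynomial radial distortion: r_d = r * (1 + k1*r^2 + k2*r^4).
def distort(r, k1, k2):
    return r * (1 + k1 * r**2 + k2 * r**4)

k1, k2 = -0.30, 0.02  # illustrative, made-up coefficients

# Monotonicity of r -> r_d requires the derivative polynomial
# 1 + 3*k1*r^2 + 5*k2*r^4 to stay non-negative.
def deriv(r):
    return 1 + 3 * k1 * r**2 + 5 * k2 * r**4

# Inside a typical calibration range the model behaves well...
print(deriv(0.5) > 0)   # True
# ...but extrapolated beyond it, the derivative changes sign and the
# mapping folds back on itself, breaking undistortion.
print(deriv(1.5) > 0)   # False
```

Constraining such derivative polynomials to be non-negative over an interval is exactly the kind of condition that can be expressed with polynomial matrix inequalities and handled by an SDP solver.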

Wednesday, April 8, 2015

Image partitioning into convex polygons
Dorothy Duan, INRIA Sophia-Antipolis
Abstract: The oversegmentation of images into atomic regions has become a standard and powerful tool in vision. Traditional superpixel methods, which operate at the pixel level, cannot directly capture the geometric information disseminated in images. We propose an alternative to these methods by operating at the level of geometric shapes. Our algorithm partitions images into convex polygons. It presents several interesting properties in terms of geometric guarantees, region compactness and scalability. The overall strategy consists in building a Voronoi diagram that conforms to preliminarily detected line segments, before homogenizing the partition with a spatial point process distributed over the image gradient. Our method is particularly adapted to images with strong geometric signatures, typically man-made objects and environments. We show the potential of our approach with experiments on large-scale images and comparisons with state-of-the-art superpixel methods.

Monday, March 30, 2015

Piecewise-Planar Surface Reconstruction via Convex Decomposition
Pierre Alliez, INRIA Sophia-Antipolis
Abstract: We introduce a novel piecewise-planar surface reconstruction method to create compact triangle surface meshes from unoriented point clouds. Through multi-scale region growing of Hausdorff error-bounded convex planar primitives, we infer both the shape and connectivity of the input and generate a simplicial complex that efficiently captures large flat regions as well as small features and boundaries. Imposing convexity of the primitives is shown to be crucial to both the robustness and efficacy of our approach. We provide a variety of results on both synthetic and real point clouds.
This is joint work with Simon Giraudot, David Cohen-Steiner and Mathieu Desbrun.

Thursday, March 12, 2015

Randomly Walking Can Get You Lost: Spectral Clustering with Unknown Edge Weights
Hanno Ackermann, Leibniz Universität Hannover
Abstract: This talk consists of two parts. The first part focuses on the basics of spectral clustering, an algorithm for segmenting a graph into two or more disjoint subsets. The application of image segmentation is used to explain the "how-to" of spectral clustering. Instead of restating standard proofs, we present an intuition of the geometry of spectral clustering.
The second part of the talk focuses on a problem that often occurs in computer vision applications but has so far been ignored: some edge weights might not be computable. If, for instance, each vertex represents the trajectory of a 2D feature point tracked through an image sequence, some trajectories do not co-occur in the same image due to occlusion or tracking failure. In these cases, the weight of the corresponding edge cannot be computed. Along with a formal analysis of the effect this has on the algebraic connectivity, we present a solution that greatly improves the results. The effectiveness of the proposed solution is shown on simulated data and real image sequences.
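As a minimal illustration of the first part (standard spectral bipartitioning, not the missing-weight extension of the talk), the sketch below splits a toy graph by thresholding the Fiedler vector, i.e. the eigenvector of the graph Laplacian associated with its second-smallest eigenvalue.

```python
import numpy as np

# Two triangles (vertices 0-2 and 3-5) joined by a single weak edge.
W = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.1          # the weak bridge

D = np.diag(W.sum(axis=1))
L = D - W                        # unnormalized graph Laplacian

# eigh returns eigenvalues in ascending order; the eigenvector for the
# second-smallest one (the Fiedler vector) changes sign exactly across
# the low-weight cut.
vals, vecs = np.linalg.eigh(L)
fiedler = vecs[:, 1]
labels = (fiedler > 0).astype(int)
print(labels)  # one triangle labeled 0, the other labeled 1
```

The talk's point is precisely about what happens when entries of W such as the bridge weight cannot be measured at all, which perturbs the algebraic connectivity this eigenvector is built from.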

Thursday, December 4, 2014

PhD students' workshop
Hitting Sets for Disks via Local Search
Norbert Bus, LIGM, ESIEE
Abstract: Computing small-sized hitting sets for geometric objects is a basic combinatorial problem in geometry. In this talk we present new results on a fundamental problem: given a set of disks D and a set of points P in the plane, compute a small-sized subset of P that hits all the disks in D. We present an improved approximation algorithm for this problem, which computes an (8+ε)-approximation in O(n^2.34) time. The main result is an improved analysis of the approximation factor of the broadly applicable local-search technique, along with an efficient algorithm for it.
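For intuition about the problem itself, here is a naive greedy baseline for the point/disk hitting-set instance (our sketch only: this is not the local-search algorithm of the talk and carries no (8+ε) guarantee); it assumes every disk contains at least one point of P.

```python
import math

def hits(p, disk):
    """True if point p lies inside the disk (cx, cy, r)."""
    cx, cy, r = disk
    return math.hypot(p[0] - cx, p[1] - cy) <= r

def greedy_hitting_set(points, disks):
    """Repeatedly pick the point hitting the most not-yet-hit disks."""
    unhit, chosen = list(disks), []
    while unhit:
        best = max(points, key=lambda p: sum(hits(p, d) for d in unhit))
        if not any(hits(best, d) for d in unhit):
            raise ValueError("some disk is hit by no point of P")
        chosen.append(best)
        unhit = [d for d in unhit if not hits(best, d)]
    return chosen

points = [(0, 0), (5, 0), (10, 0)]
disks = [(0, 1, 2), (5, 1, 2), (4, -1, 2), (10, 0, 1)]
print(greedy_hitting_set(points, disks))
```

Local search improves on such a solution by repeatedly swapping a small subset of chosen points for a smaller replacement that still hits every disk; the talk's contribution is a sharper analysis of how close that process gets to the optimum.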
Development of imaging techniques for mucociliary measurement
Élodie Puybareau, LIGM, ESIEE
Abstract: Mucociliary clearance is a fundamental defense mechanism against environmental attacks (allergens, pollution, microorganisms...), based on the capacity of ciliated cells to beat efficiently. Alterations of this clearance capacity lead to ongoing diseases such as chronic sinusitis. Today, the only way to assess the correct motion of cilia is to perform a biopsy, record it under a microscope with a high-speed camera, and manually track cilia to estimate the beating frequency.
Measuring the efficacy of patient treatments is therefore unreliable, and it is also difficult to organize a real follow-up of chronic pathologies. Our aim is to develop a tool that can be used either in vivo or in vitro to measure clearance and quantify cilia motion.
The first step of our work was to automate the existing method of frequency estimation on cilia sequences. We managed to provide an automatic estimation of the ciliary beating frequency, which was validated by comparison with ground truth.
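The abstract does not detail the estimator, but a common way to automate beating-frequency estimation is to locate the dominant peak of the Fourier spectrum of a pixel's intensity over time. A sketch on a synthetic signal (the 500 fps frame rate and 12 Hz beat are made-up values, not from the talk):

```python
import numpy as np

fps = 500.0                          # assumed high-speed camera frame rate
t = np.arange(1024) / fps

# Synthetic intensity of one pixel over a beating cilium: a 12 Hz
# oscillation plus measurement noise.
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 12.0 * t) + 0.3 * rng.standard_normal(t.size)

# Remove the mean, then take the magnitude spectrum and read off the
# frequency of its highest peak.
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(t.size, d=1.0 / fps)
beat_freq = freqs[spectrum.argmax()]
print(f"estimated beating frequency: {beat_freq:.2f} Hz")
```

Applied per pixel or per region of a recorded sequence, the same idea yields a frequency map that can then be compared against manually obtained ground truth.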
A generalization of well-composedness to dimension n
Nicolas Boutry, LIGM, EPITA, ESIEE
Abstract: Rosenfeld's connectivity paradox shows that, in digital topology, a simple closed curve does not always separate the plane into two connected components. For this reason, Latecki introduced the class of well-composed sets, i.e., sets without any critical configurations, which enjoy many useful topological properties.
Indeed, from a theoretical point of view, these sets satisfy the Jordan theorem and resolve the Rosenfeld paradox. Moreover, their connected components are the same whatever the chosen connectivities: we say that "the connectivities of well-composed sets are equivalent". Finally, the connected components of the boundary of their continuous analog are (n-1)-manifolds, where n is the dimension of the space.
From a practical point of view, these sets make the Euler characteristic locally computable, simplify the graph structures resulting from skeleton algorithms, simplify thinning algorithms, and make the graph structures resulting from the Tree of Shapes well-defined.
We will thus present a definition of well-composedness in any dimension using very simple mathematical tools such as blocks and antagonists, together with a formalization of critical configurations in dimension n. We will then present some crucial theorems linked to the strongest property of well-composed sets: their connectivities are equivalent.
Next, after a short reminder of well-composedness for gray-valued images, we will introduce our generalization of the characterization of well-composedness to n-dimensional images, which easily identifies whether any given image is well-composed.
Finally, after a brief description of two algorithms we recently developed to make images well-composed, one with interpolation and one without, we will show results obtained on the Laplacian of 3D MRI brain images, demonstrating that the contours of the objects are indeed 2-manifolds.
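In 2D, the critical configurations mentioned above are exactly the "checkerboard" 2x2 blocks, so well-composedness of a binary image can be tested locally. A minimal sketch of that test (our illustration, not the n-dimensional characterization of the talk):

```python
def is_well_composed_2d(img):
    """A 2D binary image is well-composed iff no 2x2 block is a critical
    configuration: two diagonal foreground pixels whose other diagonal
    is background (or vice versa)."""
    h, w = len(img), len(img[0])
    for y in range(h - 1):
        for x in range(w - 1):
            a, b = img[y][x], img[y][x + 1]
            c, d = img[y + 1][x], img[y + 1][x + 1]
            if a == d and b == c and a != b:   # checkerboard block
                return False
    return True

print(is_well_composed_2d([[1, 0], [0, 1]]))  # False: critical configuration
print(is_well_composed_2d([[1, 1], [1, 1]]))  # True
```

The checkerboard block is precisely where 4- and 8-connectivity disagree, which is why forbidding it makes the two connectivities equivalent.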
A Markov Random Field Formulation of Facade Parsing with Occlusions
Mateusz Koziński, LIGM, IMAGINE, ENPC
Abstract: We present a new shape prior formalism for the segmentation of rectified facade images. It combines the simplicity of split grammars with unprecedented expressive power: the capability of encoding simultaneous alignment in two dimensions, facade occlusions and irregular boundaries between facade elements. Our method simultaneously segments the visible and occluding objects, and recovers the structure of the occluded facade. We formulate the task of finding the most likely image segmentation conforming to a prior of the proposed form as a MAP-MRF problem over the standard 4-connected pixel grid with hard constraints on the classes of neighboring pixels, and propose an efficient optimization algorithm for solving it. We demonstrate state-of-the-art results on a number of facade segmentation datasets.

Tuesday, December 2, 2014

Geometric Algebra and Computer Graphics
Laurent Fuchs, XLIM-SIC, Université de Poitiers
Abstract: This talk presents geometric algebra, an alternative to the usual representation of geometric objects and their transformations. This formalism has developed considerably in computer graphics over the last fifteen years. The choice of representation for geometric objects is an important question in computer graphics: it determines how the algorithms for manipulating geometric objects are developed, and which geometric transformations can be applied to them. It is well known that linear algebra turns out to be ill-suited in certain situations. To work around the problems encountered, various approaches have been proposed: homogeneous coordinates, Plücker coordinates, quaternions, Grassmann spaces, etc. None of these approaches can, on its own, express all the geometric manipulations needed. This forces constant back-and-forth conversions between representations of the same geometric object, and the connections between these approaches are not all obvious. One seems to face an aggregate of formalisms in which it is sometimes hard to find one's way.
Geometric algebra integrates and extends, in arbitrary dimension, these different formalisms (Plücker coordinates, quaternions, complex numbers, the cross product), yielding a high-level language for describing geometric algorithms. This algebra is a geometric interpretation of Clifford algebra. This geometric point of view on Clifford algebra was revived by the American physicist D. Hestenes in the 1960s, and its value for computer graphics has been demonstrated continually over the last fifteen years. The idea is to develop a geometric "calculus" (an algebra) that bridges the synthetic language of geometry (which speaks of points, lines, circles, etc.) and the analytic language of geometry (which expresses computations on coordinates). In this way, geometric objects (directions, points, lines, planes, circles and spheres) can be used as computational primitives. The main difficulty in implementing geometric algebra comes from the dimension of the underlying vector space; efficient methods nevertheless exist for implementing it.

Thursday, November 20, 2014

Delaunay triangulations in manifolds
Arijit Ghosh, Max Planck Institute for Informatics, Saarbrücken
Abstract: Delaunay has shown that the Delaunay complex of a finite set of points P of Euclidean space R^m is a triangulation of P, provided that P satisfies a mild genericity property. Voronoi diagrams and Delaunay complexes can be defined for arbitrary Riemannian manifolds. However, Delaunay's genericity assumption no longer guarantees that the Delaunay complex will be a triangulation; stronger assumptions on P are required. A natural one is to assume that P is sufficiently dense. Although partial results in this direction have been obtained (or claimed), we show that, for manifolds of dimension greater than 2, sample density alone is insufficient to ensure that the Delaunay complex is a triangulation. We will also discuss an approach to work around this problem and make the Delaunay triangulation work.
This is joint work with Jean-Daniel Boissonnat (Geometrica, INRIA) and Ramsay Dyer (Johann Bernoulli Institute, University of Groningen).
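In the planar case, the "mild genericity property" is that no four points of P be cocircular, which is testable with the classical in-circle determinant predicate (a standard computational-geometry tool, sketched here for illustration and not specific to the talk):

```python
def in_circle(a, b, c, d):
    """In-circle determinant for 2D points, with a, b, c in
    counter-clockwise order: positive if d lies strictly inside their
    circumcircle, negative if strictly outside, and zero if the four
    points are cocircular -- the degenerate case that breaks Delaunay
    genericity."""
    m = [[p[0] - d[0], p[1] - d[1],
          (p[0] - d[0]) ** 2 + (p[1] - d[1]) ** 2] for p in (a, b, c)]
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# The four corners of a unit square are cocircular: non-generic.
print(in_circle((0, 0), (1, 0), (1, 1), (0, 1)))  # 0
# A point far from the circumcircle of a triangle lies outside it.
print(in_circle((0, 0), (1, 0), (0, 1), (2, 2)))  # negative
```

The talk's point is that on manifolds of dimension greater than 2, even strengthening this kind of non-degeneracy by requiring dense sampling is not enough for the Delaunay complex to be a triangulation.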

Tuesday, November 4, 2014

Analysis methods for mapping cortical anatomy in vivo at 7 Tesla
PierreLouis Bazin, Max Planck Institute

Thursday, October 16, 2014

Image segmentation with multiregion level set evolution and level set trees
Anastasia Dubrovina, Technion, Israel
Abstract: In this talk I will present two topics.
In the first part, I will describe a method for segmenting an image into an arbitrary number of regions using an axiomatic variational approach. In the suggested framework, the segmentation is performed by level set evolution. Yet, contrary to most existing methods, multiple regions are here represented by a single non-negative level set function. The level set evolution is efficiently executed through the Voronoi Implicit Interface Method for multiphase interface evolution. The proposed method makes it possible to incorporate various generic region appearance models while avoiding metrication errors, and obtains accurate segmentation results on various natural 2D and 3D images.
In the second part of the talk I will present an efficient method for the precise computation of image-aware geodesic distances used in image editing algorithms. It exploits the connection between the representation of an image as a mapping from a Cartesian grid and as a collection of its level sets, organized into a graph structure. The distance computation is reformulated in the domain of the image level sets, where it can be calculated without introducing the approximation errors that are unavoidable when working in the image domain. Advantages of the proposed approach will be demonstrated on an image segmentation application.

Thursday, October 16, 2014

Ranking with High-Order and Missing Information
Pawan Kumar, Ecole Centrale Paris
Abstract: Ranking visual samples according to their relevance to a query plays a key role in many applications, such as object detection and action classification. In this talk, I will present two max-margin frameworks for learning to rank, which are based on AP-SVM. The first framework allows us to incorporate high-order information (for example, contextual cues), while the second allows us to learn from weakly supervised samples. I will also briefly discuss how AP-SVM learning can be sped up significantly, which makes it a viable alternative to the commonly used 0-1 SVM.
