<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>David Vázquez</title><link>https://david-vazquez.com/</link><atom:link href="https://david-vazquez.com/index.xml" rel="self" type="application/rss+xml"/><description>David Vázquez</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Tue, 07 Apr 2026 00:00:00 +0000</lastBuildDate><image><url>https://david-vazquez.com/media/icon_hu_a3642885bc94ba2d.png</url><title>David Vázquez</title><link>https://david-vazquez.com/</link></image><item><title>Bio</title><link>https://david-vazquez.com/bio/</link><pubDate>Tue, 07 Apr 2026 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/bio/</guid><description/></item><item><title>Contact</title><link>https://david-vazquez.com/contact/</link><pubDate>Tue, 07 Apr 2026 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/contact/</guid><description/></item><item><title>Experience</title><link>https://david-vazquez.com/experience/</link><pubDate>Tue, 07 Apr 2026 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/experience/</guid><description/></item><item><title>Projects</title><link>https://david-vazquez.com/projects/</link><pubDate>Tue, 07 Apr 2026 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/projects/</guid><description/></item><item><title>Publications</title><link>https://david-vazquez.com/publications/</link><pubDate>Tue, 07 Apr 2026 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publications/</guid><description/></item><item><title>We're hiring researchers and engineers</title><link>https://david-vazquez.com/news/hiring/</link><pubDate>Tue, 07 Apr 2026 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/news/hiring/</guid><description>&lt;p&gt;The Foundational AI Research team at ServiceNow Research is hiring Senior Research Engineers and Scientists in Montreal. 
Research areas include AI agents, LLMs, reinforcement learning, time series analysis, and security. &lt;a href="https://jobs.smartrecruiters.com/ServiceNow/744000048939498" target="_blank" rel="noopener"&gt;Apply here&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Presenting at ICLR 2026 in Rio de Janeiro</title><link>https://david-vazquez.com/news/iclr-2026/</link><pubDate>Wed, 01 Apr 2026 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/news/iclr-2026/</guid><description>&lt;p&gt;Our team presented multiple papers at ICLR 2026 in Rio de Janeiro, including work on multimodal models, web agents, and enterprise AI benchmarks.&lt;/p&gt;</description></item><item><title>Apriel-1.5-OpenReasoner: RL Post-Training for General-Purpose and Efficient Reasoning</title><link>https://david-vazquez.com/publication/pardinas-2026-apriel/</link><pubDate>Thu, 01 Jan 2026 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/pardinas-2026-apriel/</guid><description/></item><item><title>StarFlow: Generating Structured Workflow Outputs from Sketch Images</title><link>https://david-vazquez.com/publication/bechard-2026-starflow/</link><pubDate>Thu, 01 Jan 2026 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/bechard-2026-starflow/</guid><description/></item><item><title>VectorGym: A Multitask Benchmark for SVG Code Generation, Sketching, and Editing</title><link>https://david-vazquez.com/publication/rodriguez-2026-vectorgym/</link><pubDate>Thu, 01 Jan 2026 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/rodriguez-2026-vectorgym/</guid><description/></item><item><title>WildSVG: Towards Reliable SVG Generation Under Real-World Conditions</title><link>https://david-vazquez.com/publication/terral-2026-wildsvg/</link><pubDate>Thu, 01 Jan 2026 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/terral-2026-wildsvg/</guid><description/></item><item><title>AI Tools for Indigenous 
Languages</title><link>https://david-vazquez.com/project/indigenous-languages/</link><pubDate>Sat, 01 Nov 2025 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/project/indigenous-languages/</guid><description>&lt;p&gt;An NSERC Discovery Grant-funded project developing multimodal AI tools for underrepresented languages, with a focus on the Matsigenka language of Peru and Inuktitut in northern Canada. The project follows OCAP and TCPS 2 data sovereignty principles and involves community partners including Tejiendo Puentes en Salud, Ayni Desarrollo, Heritage Lab, and CECONAMA. Conducted through Polytechnique Montréal in collaboration with MILA.&lt;/p&gt;</description></item><item><title>NSERC Discovery Grant awarded for Indigenous language AI tools</title><link>https://david-vazquez.com/news/nserc-grant/</link><pubDate>Sat, 01 Nov 2025 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/news/nserc-grant/</guid><description>&lt;p&gt;Awarded an NSERC Discovery Grant to fund research on multimodal AI translation and literacy tools for the Matsigenka and Inuktitut communities, conducted through Polytechnique Montréal in collaboration with MILA.&lt;/p&gt;</description></item><item><title>AlignVLM accepted at NeurIPS 2025</title><link>https://david-vazquez.com/news/alignvlm-neurips/</link><pubDate>Mon, 01 Sep 2025 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/news/alignvlm-neurips/</guid><description>&lt;p&gt;Our paper &amp;ldquo;AlignVLM: Bridging Vision and Language Latent Spaces for Multimodal Document Understanding&amp;rdquo; has been accepted at NeurIPS 2025.&lt;/p&gt;</description></item><item><title>Appointed Adjunct Professor at Polytechnique Montréal</title><link>https://david-vazquez.com/news/polytechnique-appointment/</link><pubDate>Mon, 01 Sep 2025 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/news/polytechnique-appointment/</guid><description>&lt;p&gt;Joined Polytechnique Montréal as Adjunct Professor affiliated with MILA, co-supervising 
doctoral students and leading research on multimodal AI tools for Indigenous language preservation.&lt;/p&gt;</description></item><item><title>EnterpriseOps-Gym</title><link>https://david-vazquez.com/project/enterpriseops-gym/</link><pubDate>Tue, 01 Apr 2025 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/project/enterpriseops-gym/</guid><description>&lt;p&gt;EnterpriseOps-Gym features 1,150 expert-designed tasks across 8 interconnected enterprise domains, with persistent state, strict verification logic, and policy-aware execution requirements. It tests whether AI agents can handle domain expertise, not just general reasoning.&lt;/p&gt;</description></item><item><title>EnterpriseOps-Gym Released</title><link>https://david-vazquez.com/news/enterpriseops-gym/</link><pubDate>Tue, 01 Apr 2025 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/news/enterpriseops-gym/</guid><description>&lt;p&gt;Our new benchmark for evaluating stateful agentic planning in enterprise settings is now public. 1,150 tasks across 8 domains. 
&lt;a href="https://arxiv.org/abs/2505.00000" target="_blank" rel="noopener"&gt;Paper&lt;/a&gt; · &lt;a href="https://github.com/ServiceNow/EnterpriseOps-Gym" target="_blank" rel="noopener"&gt;Code&lt;/a&gt;&lt;/p&gt;</description></item><item><title>Apriel Model Family</title><link>https://david-vazquez.com/project/apriel/</link><pubDate>Sat, 01 Mar 2025 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/project/apriel/</guid><description>&lt;p&gt;The Apriel family of open language models developed at ServiceNow Research, including base models (Apriel 1.5, 1.6), reasoning models (AprielReasoner), and safety models (AprielGuard, an 8B parameter guardian model).&lt;/p&gt;</description></item><item><title>StarVector accepted at CVPR 2025</title><link>https://david-vazquez.com/news/starvector-cvpr/</link><pubDate>Sat, 01 Mar 2025 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/news/starvector-cvpr/</guid><description>&lt;p&gt;Our paper &amp;ldquo;StarVector: Generating Scalable Vector Graphics Code from Images&amp;rdquo; has been accepted at CVPR 2025. Former intern Juan A. Rodriguez co-founded &lt;a href="https://quiver.ai" target="_blank" rel="noopener"&gt;QuiverAI&lt;/a&gt; based on this research, raising an $8.3M seed round led by Andreessen Horowitz.&lt;/p&gt;</description></item><item><title>BigDocs accepted at ICLR 2025</title><link>https://david-vazquez.com/news/bigdocs-iclr/</link><pubDate>Wed, 22 Jan 2025 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/news/bigdocs-iclr/</guid><description>&lt;p&gt;Our paper &amp;ldquo;BigDocs: An Open Dataset for Training Multimodal Models on Document and Code Tasks&amp;rdquo; has been accepted at ICLR 2025. 
&lt;a href="https://arxiv.org/abs/2412.04626" target="_blank" rel="noopener"&gt;Paper&lt;/a&gt; · &lt;a href="https://github.com/ServiceNow/BigDocs" target="_blank" rel="noopener"&gt;Code&lt;/a&gt;&lt;/p&gt;</description></item><item><title>AgentAda: Skill-Adaptive Data Analytics for Tailored Insight Discovery</title><link>https://david-vazquez.com/publication/abaskohi-2025-agentada/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/abaskohi-2025-agentada/</guid><description/></item><item><title>AlignVLM: Bridging Vision and Language Latent Spaces for Multimodal Document Understanding</title><link>https://david-vazquez.com/publication/masry-2025-alignvlm/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/masry-2025-alignvlm/</guid><description/></item><item><title>BigCharts-R1: Enhanced Chart Reasoning with Visual Reinforcement Finetuning</title><link>https://david-vazquez.com/publication/masry-2025-bigcharts/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/masry-2025-bigcharts/</guid><description/></item><item><title>BigDocs</title><link>https://david-vazquez.com/project/bigdocs/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/project/bigdocs/</guid><description>&lt;p&gt;BigDocs is a large scale, open, and permissively licensed dataset for training multimodal models on document understanding and code generation tasks. 
Published at ICLR 2025.&lt;/p&gt;</description></item><item><title>BigDocs: An Open Dataset for Training Multimodal Models on Document and Code Tasks</title><link>https://david-vazquez.com/publication/rodriguez-2025-bigdocs/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/rodriguez-2025-bigdocs/</guid><description/></item><item><title>Distilling Specialized Orders for Visual Generation</title><link>https://david-vazquez.com/publication/pramanik-2025-distilling/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/pramanik-2025-distilling/</guid><description/></item><item><title>Grounding Computer Use Agents on Human Demonstrations</title><link>https://david-vazquez.com/publication/feizi-2025-grounding/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/feizi-2025-grounding/</guid><description/></item><item><title>Intent Discovery using Large Language Models</title><link>https://david-vazquez.com/publication/garcia-2025-intent/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/garcia-2025-intent/</guid><description/></item><item><title>Rendering-Aware Reinforcement Learning for Vector Graphics Generation</title><link>https://david-vazquez.com/publication/rodriguez-2025-rendering/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/rodriguez-2025-rendering/</guid><description/></item><item><title>StarVector: Generating Scalable Vector Graphics Code from Images and Text</title><link>https://david-vazquez.com/publication/rodriguez-2025-starvector/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/rodriguez-2025-starvector/</guid><description/></item><item><title>UI-Vision: A Desktop-Centric GUI Benchmark for Visual Perception and 
Interaction</title><link>https://david-vazquez.com/publication/nayak-2025-uivision/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/nayak-2025-uivision/</guid><description/></item><item><title>WebMMU: A Benchmark for Multimodal Multilingual Website Understanding and Code Generation</title><link>https://david-vazquez.com/publication/awal-2025-webmmu/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/awal-2025-webmmu/</guid><description/></item><item><title>Improved Training Set Selection for Semi-Supervised Learning</title><link>https://david-vazquez.com/publication/laradji-2024-improved/</link><pubDate>Sun, 01 Dec 2024 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/laradji-2024-improved/</guid><description/></item><item><title>WorkArena accepted at ICML 2024</title><link>https://david-vazquez.com/news/workarena-icml/</link><pubDate>Mon, 08 Jul 2024 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/news/workarena-icml/</guid><description>&lt;p&gt;Our paper &amp;ldquo;WorkArena: How Capable are Web Agents at Solving Common Knowledge Work Tasks?&amp;rdquo; has been accepted at ICML 2024. &lt;a href="https://arxiv.org/abs/2403.07718" target="_blank" rel="noopener"&gt;Paper&lt;/a&gt; · &lt;a href="https://github.com/ServiceNow/WorkArena" target="_blank" rel="noopener"&gt;Code&lt;/a&gt;&lt;/p&gt;</description></item><item><title>WorkArena and BrowserGym</title><link>https://david-vazquez.com/project/workarena/</link><pubDate>Mon, 01 Jul 2024 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/project/workarena/</guid><description>&lt;p&gt;WorkArena is a benchmark of tasks, built on the ServiceNow platform, that measures how well web agents can perform common knowledge work. BrowserGym provides a rich environment for designing and evaluating such agents with multimodal observations and a comprehensive action set. 
Published at ICML 2024.&lt;/p&gt;</description></item><item><title>A Multimodal Class-Incremental Learning Benchmark for Classification Tasks</title><link>https://david-vazquez.com/publication/dalessandro-2024-a/</link><pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/dalessandro-2024-a/</guid><description/></item><item><title>CADet: Fully Self-Supervised Out-of-Distribution Detection with Contrastive Learning</title><link>https://david-vazquez.com/publication/guille-escuret-2024-cadet/</link><pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/guille-escuret-2024-cadet/</guid><description/></item><item><title>GEO-Bench: Toward Foundation Models for Earth Monitoring</title><link>https://david-vazquez.com/publication/lacoste-2024-geo/</link><pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/lacoste-2024-geo/</guid><description/></item><item><title>Group Robust Classification without Any Group Information</title><link>https://david-vazquez.com/publication/tsirigotis-2024-group/</link><pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/tsirigotis-2024-group/</guid><description/></item><item><title>InsightBench: Evaluating Business Analytics Agents through Multi-Step Insight Generation</title><link>https://david-vazquez.com/publication/sahu-2024-insightbench/</link><pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/sahu-2024-insightbench/</guid><description/></item><item><title>RepLiQA: A Question-Answering Dataset for Benchmarking LLMs on Unseen Reference Content</title><link>https://david-vazquez.com/publication/monteiro-2024-repliqa/</link><pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/monteiro-2024-repliqa/</guid><description/></item><item><title>Towards Good Validation Metrics for Generative Models in Offline 
Model-Based Optimisation</title><link>https://david-vazquez.com/publication/beckham-2024-towards/</link><pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/beckham-2024-towards/</guid><description/></item><item><title>WorkArena: How Capable Are Web Agents at Solving Common Knowledge Work Tasks?</title><link>https://david-vazquez.com/publication/drouin-2024-workarena/</link><pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/drouin-2024-workarena/</guid><description/></item><item><title>XC-Cache: Cross-Attending to Cached Context for Efficient LLM Inference</title><link>https://david-vazquez.com/publication/monteiro-2024-xc/</link><pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/monteiro-2024-xc/</guid><description/></item><item><title>Anomaly Detection using Graph Neural Networks</title><link>https://david-vazquez.com/publication/taslakian-2023-anomaly/</link><pubDate>Sun, 01 Jan 2023 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/taslakian-2023-anomaly/</guid><description/></item><item><title>Automatic Data Augmentation Learning using Bilevel Optimization for Histopathological Images</title><link>https://david-vazquez.com/publication/mounsaveng-2023-automatic/</link><pubDate>Sun, 01 Jan 2023 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/mounsaveng-2023-automatic/</guid><description/></item><item><title>Capture the Flag: Uncovering Data Insights with Large Language Models</title><link>https://david-vazquez.com/publication/laradji-2023-capture/</link><pubDate>Sun, 01 Jan 2023 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/laradji-2023-capture/</guid><description/></item><item><title>Constraining Representations Yields Models That Know What They Don't Know</title><link>https://david-vazquez.com/publication/monteiro-2023-constraining/</link><pubDate>Sun, 01 Jan 2023 00:00:00 
+0000</pubDate><guid>https://david-vazquez.com/publication/monteiro-2023-constraining/</guid><description/></item><item><title>Expecting the Unexpected: Towards Broad Out-of-Distribution Detection</title><link>https://david-vazquez.com/publication/guille-escuret-2023-expecting/</link><pubDate>Sun, 01 Jan 2023 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/guille-escuret-2023-expecting/</guid><description/></item><item><title>FigGen: Text to Scientific Figure Generation</title><link>https://david-vazquez.com/publication/rodriguez-2023-figgen/</link><pubDate>Sun, 01 Jan 2023 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/rodriguez-2023-figgen/</guid><description/></item><item><title>Flaky Performances When Pretraining on Relational Databases</title><link>https://david-vazquez.com/publication/liu-2023-flaky/</link><pubDate>Sun, 01 Jan 2023 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/liu-2023-flaky/</guid><description/></item><item><title>Improving Generalization in Task-Oriented Dialogues with Workflows and Action Plans</title><link>https://david-vazquez.com/publication/raimondo-2023-improving/</link><pubDate>Sun, 01 Jan 2023 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/raimondo-2023-improving/</guid><description/></item><item><title>IntentGPT: Few-Shot Intent Discovery with Large Language Models</title><link>https://david-vazquez.com/publication/rodriguez-2023-intentgpt/</link><pubDate>Sun, 01 Jan 2023 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/rodriguez-2023-intentgpt/</guid><description/></item><item><title>Knowledge Hypergraph Embedding Meets Relational Algebra</title><link>https://david-vazquez.com/publication/fatemi-2023-knowledge/</link><pubDate>Sun, 01 Jan 2023 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/fatemi-2023-knowledge/</guid><description/></item><item><title>Language Decision Transformers with Exponential Tilt for 
Interactive Text Environments</title><link>https://david-vazquez.com/publication/gontier-2023-language/</link><pubDate>Sun, 01 Jan 2023 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/gontier-2023-language/</guid><description/></item><item><title>Leveraging Human Preferences to Master Poetry</title><link>https://david-vazquez.com/publication/pardinas-2023-leveraging/</link><pubDate>Sun, 01 Jan 2023 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/pardinas-2023-leveraging/</guid><description/></item><item><title>Multilingual Code Retrieval without Paired Data: New Datasets and Benchmarks</title><link>https://david-vazquez.com/publication/monteiro-2023-multilingual/</link><pubDate>Sun, 01 Jan 2023 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/monteiro-2023-multilingual/</guid><description/></item><item><title>OC-NMN: Object-Centric Compositional Neural Module Network for Generative Visual Analogical Reasoning</title><link>https://david-vazquez.com/publication/assouel-2023-oc/</link><pubDate>Sun, 01 Jan 2023 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/assouel-2023-oc/</guid><description/></item><item><title>OCR-VQGAN: Taming Text-within-Image Generation</title><link>https://david-vazquez.com/publication/rodriguez-2023-ocr/</link><pubDate>Sun, 01 Jan 2023 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/rodriguez-2023-ocr/</guid><description/></item><item><title>The Unsolved Challenges of LLMs as Generalist Web Agents: A Case Study</title><link>https://david-vazquez.com/publication/assouel-2023-the/</link><pubDate>Sun, 01 Jan 2023 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/assouel-2023-the/</guid><description/></item><item><title>TK-KNN: A Balanced Distance-Based Pseudo Labeling Approach for Semi-Supervised Intent Classification</title><link>https://david-vazquez.com/publication/botzer-2023-tk/</link><pubDate>Sun, 01 Jan 2023 00:00:00 
+0000</pubDate><guid>https://david-vazquez.com/publication/botzer-2023-tk/</guid><description/></item><item><title>3rd Continual Learning Workshop Challenge on Egocentric Category and Instance Level Object Understanding</title><link>https://david-vazquez.com/publication/pellegrini-2022-3rd/</link><pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/pellegrini-2022-3rd/</guid><description/></item><item><title>A Probabilistic Perspective on Reinforcement Learning via Supervised Learning</title><link>https://david-vazquez.com/publication/piche-2022-a/</link><pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/piche-2022-a/</guid><description/></item><item><title>A Survey of Self-Supervised and Few-Shot Object Detection</title><link>https://david-vazquez.com/publication/huang-2022-survey/</link><pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/huang-2022-survey/</guid><description/></item><item><title>Constraining Low-Level Representations to Define Effective Confidence Scores</title><link>https://david-vazquez.com/publication/monteiro-2022-constraining/</link><pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/monteiro-2022-constraining/</guid><description/></item><item><title>Contrastive Self-Supervision Defines General-Purpose Similarity Functions</title><link>https://david-vazquez.com/publication/guille-escuret-2022-contrastive/</link><pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/guille-escuret-2022-contrastive/</guid><description/></item><item><title>CVPR 2020 Continual Learning in Computer Vision Competition: Approaches, Results, Current Challenges and Future Directions</title><link>https://david-vazquez.com/publication/lomonaco-2022-cvpr/</link><pubDate>Sat, 01 Jan 2022 00:00:00 
+0000</pubDate><guid>https://david-vazquez.com/publication/lomonaco-2022-cvpr/</guid><description/></item><item><title>Data Augmentation for Intent Classification with Off-the-Shelf Large Language Models</title><link>https://david-vazquez.com/publication/sahu-2022-data/</link><pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/sahu-2022-data/</guid><description/></item><item><title>Exploring Validation Metrics for Offline Model-Based Optimisation with Diffusion Models</title><link>https://david-vazquez.com/publication/beckham-2022-exploring/</link><pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/beckham-2022-exploring/</guid><description/></item><item><title>Flaky Performances When Pretraining on Relational Databases</title><link>https://david-vazquez.com/publication/liu-2022-flaky/</link><pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/liu-2022-flaky/</guid><description/></item><item><title>Implicit Offline Reinforcement Learning via Supervised Learning</title><link>https://david-vazquez.com/publication/piche-2022-implicit/</link><pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/piche-2022-implicit/</guid><description/></item><item><title>Multi-Label Iterated Learning for Image Classification with Label Ambiguity</title><link>https://david-vazquez.com/publication/rajeswar-2022-multi/</link><pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/rajeswar-2022-multi/</guid><description/></item><item><title>OCIM: Object-Centric Compositional Imagination for Visual Abstract Reasoning</title><link>https://david-vazquez.com/publication/assouel-2022-ocim/</link><pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/assouel-2022-ocim/</guid><description/></item><item><title>Overcoming Challenges in Leveraging GANs for 
Few-Shot Data Augmentation</title><link>https://david-vazquez.com/publication/beckham-2022-overcoming/</link><pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/beckham-2022-overcoming/</guid><description/></item><item><title>Sequoia: A Software Framework to Unify Continual Learning Research</title><link>https://david-vazquez.com/publication/normandin-2022-sequoia/</link><pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/normandin-2022-sequoia/</guid><description/></item><item><title>Touch-Based Curiosity for Sparse-Reward Tasks</title><link>https://david-vazquez.com/publication/rajeswar-2022-haptics/</link><pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/rajeswar-2022-haptics/</guid><description/></item><item><title>Workflow Discovery from Dialogues in the Low Data Regime</title><link>https://david-vazquez.com/publication/laradji-2022-workflow/</link><pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/laradji-2022-workflow/</guid><description/></item><item><title>3D Perception with Slanted Stixels on GPU</title><link>https://david-vazquez.com/publication/hernandez-juarez-2021-3d/</link><pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/hernandez-juarez-2021-3d/</guid><description/></item><item><title>A Deep Learning Localization Method for Measuring Abdominal Muscle Dimensions in Ultrasound Images</title><link>https://david-vazquez.com/publication/saleh-2021-a/</link><pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/saleh-2021-a/</guid><description/></item><item><title>A Weakly Supervised Consistency-Based Learning Method for COVID-19 Segmentation in CT Images</title><link>https://david-vazquez.com/publication/laradji-2021-a/</link><pubDate>Fri, 01 Jan 2021 00:00:00 
+0000</pubDate><guid>https://david-vazquez.com/publication/laradji-2021-a/</guid><description/></item><item><title>Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations</title><link>https://david-vazquez.com/publication/rodriguez-2021-beyond/</link><pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/rodriguez-2021-beyond/</guid><description/></item><item><title>Decoupling Anomaly Discrimination and Representation Learning: Self-Supervised Learning for Anomaly Detection on Attributed Graph</title><link>https://david-vazquez.com/publication/corsini-2021-self/</link><pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/corsini-2021-self/</guid><description/></item><item><title>Learning Data Augmentation with Online Bilevel Optimization for Image Classification</title><link>https://david-vazquez.com/publication/mounsaveng-2021-learning/</link><pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/mounsaveng-2021-learning/</guid><description/></item><item><title>Seasonal Contrast: Unsupervised Pre-Training from Uncurated Remote Sensing Data</title><link>https://david-vazquez.com/publication/manas-2021-seasonal/</link><pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/manas-2021-seasonal/</guid><description/></item><item><title>SSR: Semi-Supervised Soft Rasterizer for Single-View 2D to 3D Reconstruction</title><link>https://david-vazquez.com/publication/laradji-2021-ssr/</link><pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/laradji-2021-ssr/</guid><description/></item><item><title>Toward Foundation Models for Earth Monitoring: Proposal for a Climate Change Benchmark</title><link>https://david-vazquez.com/publication/lacoste-2021-toward/</link><pubDate>Fri, 01 Jan 2021 00:00:00 
+0000</pubDate><guid>https://david-vazquez.com/publication/lacoste-2021-toward/</guid><description/></item><item><title>Weakly Supervised Underwater Fish Segmentation using Affinity LCFCN</title><link>https://david-vazquez.com/publication/laradji-2021-weakly/</link><pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/laradji-2021-weakly/</guid><description/></item><item><title>Counting Objects in Images Based on Approximate Locations</title><link>https://david-vazquez.com/publication/laradji-2020-countingpat/</link><pubDate>Sat, 01 Feb 2020 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/laradji-2020-countingpat/</guid><description/></item><item><title>A Realistic Fish-Habitat Dataset to Evaluate Algorithms for Underwater Visual Analysis</title><link>https://david-vazquez.com/publication/saleh-2020-a/</link><pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/saleh-2020-a/</guid><description/></item><item><title>A Weakly Supervised Region-Based Active Learning Method for COVID-19 Segmentation in CT Images</title><link>https://david-vazquez.com/publication/laradji-2020-a/</link><pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/laradji-2020-a/</guid><description/></item><item><title>Affinity LCFCN: Learning to Segment Fish with Weak Supervision</title><link>https://david-vazquez.com/publication/laradji-2020-affinity/</link><pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/laradji-2020-affinity/</guid><description/></item><item><title>Counting Cows: Tracking Illegal Cattle Ranching from High-Resolution Satellite Imagery</title><link>https://david-vazquez.com/publication/laradji-2020-counting/</link><pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/laradji-2020-counting/</guid><description/></item><item><title>Generating Virtual Images for 
Promoting Visual Artificial Intelligence</title><link>https://david-vazquez.com/publication/wang-2020-generating/</link><pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/wang-2020-generating/</guid><description/></item><item><title>Instance Segmentation with Point Supervision</title><link>https://david-vazquez.com/publication/laradji-2020-instance/</link><pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/laradji-2020-instance/</guid><description/></item><item><title>LOOC: Localize Overlapping Objects with Count Supervision</title><link>https://david-vazquez.com/publication/laradji-2020-looc/</link><pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/laradji-2020-looc/</guid><description/></item><item><title>Online Fast Adaptation and Knowledge Accumulation: A New Approach to Continual Learning</title><link>https://david-vazquez.com/publication/caccia-2020-online/</link><pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/caccia-2020-online/</guid><description/></item><item><title>Online Fast Adaptation and Knowledge Accumulation: A New Approach to Continual Learning</title><link>https://david-vazquez.com/publication/caccia-2020-osaka/</link><pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/caccia-2020-osaka/</guid><description/></item><item><title>Pix2Shape: Towards Unsupervised Learning of 3D Scenes from Images Using a View-Based Representation</title><link>https://david-vazquez.com/publication/rajeswar-2020-pix2shape/</link><pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/rajeswar-2020-pix2shape/</guid><description/></item><item><title>Proposal-Based Instance Segmentation with Point Supervision</title><link>https://david-vazquez.com/publication/laradji-2020-proposal/</link><pubDate>Wed, 01 Jan 2020 00:00:00 
+0000</pubDate><guid>https://david-vazquez.com/publication/laradji-2020-proposal/</guid><description/></item><item><title>Synbols: Probing Learning Algorithms with Synthetic Datasets</title><link>https://david-vazquez.com/publication/lacoste-2020-synbols/</link><pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/lacoste-2020-synbols/</guid><description/></item><item><title>Adversarial Learning of General Transformations for Data Augmentation</title><link>https://david-vazquez.com/publication/mounsaveng-2019-adversarial/</link><pubDate>Tue, 01 Jan 2019 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/mounsaveng-2019-adversarial/</guid><description/></item><item><title>Class-Based Styling: Real-Time Localized Style Transfer with Semantic Segmentation</title><link>https://david-vazquez.com/publication/kurzman-2019-class/</link><pubDate>Tue, 01 Jan 2019 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/kurzman-2019-class/</guid><description/></item><item><title>Context-Aware Visual Compatibility Prediction</title><link>https://david-vazquez.com/publication/cucurull-2019-context/</link><pubDate>Tue, 01 Jan 2019 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/cucurull-2019-context/</guid><description/></item><item><title>Fourier-CPPNs for Image Synthesis</title><link>https://david-vazquez.com/publication/tesfaldet-2019-fourier/</link><pubDate>Tue, 01 Jan 2019 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/tesfaldet-2019-fourier/</guid><description/></item><item><title>Knowledge Hypergraphs: Prediction Beyond Binary Relations</title><link>https://david-vazquez.com/publication/fatemi-2019-knowledge/</link><pubDate>Tue, 01 Jan 2019 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/fatemi-2019-knowledge/</guid><description/></item><item><title>Slanted Stixels: A Way to Represent Steep 
Streets</title><link>https://david-vazquez.com/publication/hernandez-juarez-2019-slanted/</link><pubDate>Tue, 01 Jan 2019 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/hernandez-juarez-2019-slanted/</guid><description/></item><item><title>Where Are the Masks: Instance Segmentation with Image-Level Supervision</title><link>https://david-vazquez.com/publication/laradji-2019-where/</link><pubDate>Tue, 01 Jan 2019 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/laradji-2019-where/</guid><description/></item><item><title>Data for Training Models, Domain Adaptation</title><link>https://david-vazquez.com/publication/lopez-2018-data/</link><pubDate>Mon, 01 Jan 2018 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/lopez-2018-data/</guid><description/></item><item><title>Environmental Perception for Intelligent Vehicles</title><link>https://david-vazquez.com/publication/armingol-2018-environmental/</link><pubDate>Mon, 01 Jan 2018 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/armingol-2018-environmental/</guid><description/></item><item><title>Learning to Remove Rain in Traffic Surveillance by using Synthetic Data</title><link>https://david-vazquez.com/publication/bahnsen-2018-learning/</link><pubDate>Mon, 01 Jan 2018 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/bahnsen-2018-learning/</guid><description/></item><item><title>Where Are the Blobs: Counting by Localization with Point Supervision</title><link>https://david-vazquez.com/publication/laradji-2018-blobs/</link><pubDate>Mon, 01 Jan 2018 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/laradji-2018-blobs/</guid><description/></item><item><title>A Benchmark for Endoluminal Scene Segmentation of Colonoscopy Images</title><link>https://david-vazquez.com/publication/vazquez-2017-a/</link><pubDate>Sun, 01 Jan 2017 00:00:00 
+0000</pubDate><guid>https://david-vazquez.com/publication/vazquez-2017-a/</guid><description/></item><item><title>GPU-Accelerated Real-Time Stixel Computation</title><link>https://david-vazquez.com/publication/hernandez-juarez-2017-gpu/</link><pubDate>Sun, 01 Jan 2017 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/hernandez-juarez-2017-gpu/</guid><description/></item><item><title>Guest Editorial: Deep Learning in Computer Vision</title><link>https://david-vazquez.com/publication/hospedales-2017-guest/</link><pubDate>Sun, 01 Jan 2017 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/hospedales-2017-guest/</guid><description/></item><item><title>On-Board Detection of Pedestrian Intentions</title><link>https://david-vazquez.com/publication/fang-2017-on/</link><pubDate>Sun, 01 Jan 2017 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/fang-2017-on/</guid><description/></item><item><title>Semantic Segmentation of Urban Scenes via Domain Adaptation of SYNTHIA</title><link>https://david-vazquez.com/publication/ros-2017-semantic/</link><pubDate>Sun, 01 Jan 2017 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/ros-2017-semantic/</guid><description/></item><item><title>Simulation Tools</title><link>https://david-vazquez.com/publication/brazalez-2017-simulation/</link><pubDate>Sun, 01 Jan 2017 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/brazalez-2017-simulation/</guid><description/></item><item><title>Slanted Stixels: Representing San Francisco's steepest streets</title><link>https://david-vazquez.com/publication/hernandez-juarez-2017-slanted/</link><pubDate>Sun, 01 Jan 2017 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/hernandez-juarez-2017-slanted/</guid><description/></item><item><title>The One Hundred Layers Tiramisu: Fully Convolutional DenseNets for Semantic 
Segmentation</title><link>https://david-vazquez.com/publication/jegou-2017-the/</link><pubDate>Sun, 01 Jan 2017 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/jegou-2017-the/</guid><description/></item><item><title>Training My Car to See using Virtual Worlds</title><link>https://david-vazquez.com/publication/lopez-2017-training/</link><pubDate>Sun, 01 Jan 2017 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/lopez-2017-training/</guid><description/></item><item><title>Vision-Based Advanced Driver Assistance Systems</title><link>https://david-vazquez.com/publication/geronimo-2017-vision/</link><pubDate>Sun, 01 Jan 2017 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/geronimo-2017-vision/</guid><description/></item><item><title>SYNTHIA</title><link>https://david-vazquez.com/project/synthia/</link><pubDate>Wed, 01 Jun 2016 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/project/synthia/</guid><description>&lt;p&gt;SYNTHIA is a large collection of synthetic images for semantic segmentation of urban scenes, generated using a video game engine. Published at CVPR 2016 and widely adopted in the autonomous driving research community. 
Licensed for commercial use by Intel, Audi, Huawei, Toyota, and Samsung.&lt;/p&gt;</description></item><item><title>Comparison of Two Non-Linear Model-Based Control Strategies for Autonomous Vehicles</title><link>https://david-vazquez.com/publication/alcala-2016-comparison/</link><pubDate>Fri, 01 Jan 2016 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/alcala-2016-comparison/</guid><description/></item><item><title>Embedded Real-Time Stereo Estimation via Semi-Global Matching on the GPU</title><link>https://david-vazquez.com/publication/juarez-2016-embedded/</link><pubDate>Fri, 01 Jan 2016 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/juarez-2016-embedded/</guid><description/></item><item><title>From Virtual to Real World Visual Perception using Domain Adaptation -- The DPM as Example</title><link>https://david-vazquez.com/publication/lopez-2016-from/</link><pubDate>Fri, 01 Jan 2016 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/lopez-2016-from/</guid><description/></item><item><title>GPU-Based Pedestrian Detection for Autonomous Driving</title><link>https://david-vazquez.com/publication/campmany-2016-gpu/</link><pubDate>Fri, 01 Jan 2016 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/campmany-2016-gpu/</guid><description/></item><item><title>Hierarchical Adaptive Structural SVM for Domain Adaptation</title><link>https://david-vazquez.com/publication/xu-2016-hierarchical/</link><pubDate>Fri, 01 Jan 2016 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/xu-2016-hierarchical/</guid><description/></item><item><title>Node-Adapt, Path-Adapt and Tree-Adapt: Model-Transfer Domain Adaptation for Random Forest</title><link>https://david-vazquez.com/publication/mozafari-2016-node/</link><pubDate>Fri, 01 Jan 2016 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/mozafari-2016-node/</guid><description/></item><item><title>On-Board Object Detection: Multicue, 
Multimodal, and Multiview Random Forest of Local Experts</title><link>https://david-vazquez.com/publication/gonzalez-2016-on/</link><pubDate>Fri, 01 Jan 2016 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/gonzalez-2016-on/</guid><description/></item><item><title>Pedestrian Detection at Day/Night Time with Visible and FIR Cameras: A Comparison</title><link>https://david-vazquez.com/publication/gonzalez-2016-pedestrian/</link><pubDate>Fri, 01 Jan 2016 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/gonzalez-2016-pedestrian/</guid><description/></item><item><title>PixelVAE: A Latent Variable Model for Natural Images</title><link>https://david-vazquez.com/publication/gulrajani-2016-pixelvae/</link><pubDate>Fri, 01 Jan 2016 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/gulrajani-2016-pixelvae/</guid><description/></item><item><title>The SYNTHIA Dataset: A Large Collection of Synthetic Images for Semantic Segmentation of Urban Scenes</title><link>https://david-vazquez.com/publication/ros-2016-the/</link><pubDate>Fri, 01 Jan 2016 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/ros-2016-the/</guid><description/></item><item><title>3D-Guided Multiscale Sliding Window for Pedestrian Detection</title><link>https://david-vazquez.com/publication/gonzalez-2015-3d/</link><pubDate>Thu, 01 Jan 2015 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/gonzalez-2015-3d/</guid><description/></item><item><title>Multiview Random Forest of Local Experts Combining RGB and LIDAR Data for Pedestrian Detection</title><link>https://david-vazquez.com/publication/gonzalez-2015-multiview/</link><pubDate>Thu, 01 Jan 2015 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/gonzalez-2015-multiview/</guid><description/></item><item><title>Spatiotemporal Stacked Sequential Learning for Pedestrian 
Detection</title><link>https://david-vazquez.com/publication/gonzalez-2015-spatiotemporal/</link><pubDate>Thu, 01 Jan 2015 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/gonzalez-2015-spatiotemporal/</guid><description/></item><item><title>Vision-Based Offline-Online Perception Paradigm for Autonomous Driving</title><link>https://david-vazquez.com/publication/ros-2015-vision/</link><pubDate>Thu, 01 Jan 2015 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/ros-2015-vision/</guid><description/></item><item><title>Cost-Sensitive Structured SVM for Multi-Category Domain Adaptation</title><link>https://david-vazquez.com/publication/xu-2014-cost/</link><pubDate>Wed, 01 Jan 2014 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/xu-2014-cost/</guid><description/></item><item><title>Domain Adaptation of Deformable Part-Based Models</title><link>https://david-vazquez.com/publication/xu-2014-domain/</link><pubDate>Wed, 01 Jan 2014 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/xu-2014-domain/</guid><description/></item><item><title>Elektra Autonomous Vehicle</title><link>https://david-vazquez.com/project/elektra/</link><pubDate>Wed, 01 Jan 2014 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/project/elektra/</guid><description>&lt;style&gt;
.section-row {
display: flex;
flex-wrap: wrap;
gap: 2.5rem;
margin: 2.5rem 0;
align-items: flex-start;
}
.section-row.reverse { flex-direction: row-reverse; }
.section-text { flex: 1; min-width: 300px; }
.section-media { flex: 0 0 calc(45% - 1.25rem); }
@media (max-width: 1024px) {
.section-row, .section-row.reverse { flex-direction: column; }
.section-media { flex: 1 1 100%; }
}
.section-slideshow {
position: relative;
border-radius: 8px;
overflow: hidden;
background: #f5f5f5;
}
.section-slideshow-container {
position: relative;
width: 100%;
padding-bottom: 75%;
height: 0;
}
.section-slideshow-image {
position: absolute;
inset: 0;
opacity: 0;
transition: opacity 0.6s ease-in-out;
}
.section-slideshow-image.active { opacity: 1; }
.section-slideshow-image img {
width: 100%;
height: 100%;
object-fit: cover;
display: block;
}
.slideshow-nav {
position: absolute;
bottom: 1rem;
left: 50%;
transform: translateX(-50%);
display: flex;
gap: 0.5rem;
z-index: 10;
}
.slideshow-dot {
width: 10px;
height: 10px;
border-radius: 50%;
background: rgba(255,255,255,0.5);
cursor: pointer;
border: none;
transition: background 0.3s;
}
.slideshow-dot.active { background: white; }
.slideshow-caption {
position: absolute;
bottom: 2.5rem;
left: 0; right: 0;
padding: 0.5rem 1rem;
background: rgba(0,0,0,0.45);
color: white;
font-size: 0.85rem;
text-align: center;
z-index: 5;
}
.elektra-stats {
display: grid;
grid-template-columns: repeat(4, 1fr);
gap: 1.5rem;
margin: 2rem 0;
text-align: center;
}
.elektra-stat {
padding: 1.25rem;
background: #f5f5f5;
border-radius: 8px;
}
.dark .elektra-stat { background: #2a2a2a; }
.elektra-stat-value { font-size: 1.6rem; font-weight: 700; color: #333; }
.dark .elektra-stat-value { color: #eee; }
.elektra-stat-label { font-size: 0.85rem; color: #666; margin-top: 0.4rem; }
.dark .elektra-stat-label { color: #bbb; }
@media (max-width: 640px) { .elektra-stats { grid-template-columns: repeat(2, 1fr); } }
.featured-video-btn {
display: inline-flex;
align-items: center;
gap: 0.5rem;
padding: 0.875rem 1.75rem;
background: rgb(var(--color-primary-600));
color: white;
border-radius: 8px;
cursor: pointer;
font-size: 1rem;
font-weight: 600;
border: none;
margin: 1.25rem 0;
transition: background 0.2s;
}
.featured-video-btn:hover { background: rgb(var(--color-primary-700)); }
.video-links {
display: flex;
flex-wrap: wrap;
gap: 0.75rem;
margin-top: 1.5rem;
}
.video-link-btn {
display: inline-flex;
align-items: center;
gap: 0.4rem;
padding: 0.4rem 0.9rem;
border: 1px solid rgba(0,0,0,0.15);
border-radius: 6px;
font-size: 0.85rem;
font-weight: 500;
cursor: pointer;
background: none;
color: inherit;
font-family: inherit;
transition: border-color 0.2s, background 0.2s;
}
.video-link-btn:hover { border-color: rgb(var(--color-primary-500)); background: rgba(var(--color-primary-50), 0.5); }
.dark .video-link-btn { border-color: rgba(255,255,255,0.15); }
.dark .video-link-btn:hover { border-color: rgb(var(--color-primary-400)); background: rgba(255,255,255,0.05); }
.video-modal {
display: none;
position: fixed;
inset: 0;
background: rgba(0,0,0,0.8);
z-index: 1000;
align-items: center;
justify-content: center;
padding: 2rem;
}
.video-modal.active { display: flex; }
.video-modal-content {
background: white;
border-radius: 12px;
max-width: 900px;
width: 100%;
overflow: hidden;
box-shadow: 0 10px 40px rgba(0,0,0,0.3);
}
.dark .video-modal-content { background: #1a1a1a; }
.video-modal-header {
display: flex;
justify-content: space-between;
align-items: center;
padding: 1rem;
border-bottom: 1px solid #e5e5e5;
}
.dark .video-modal-header { border-bottom-color: #333; }
.video-modal-title { font-size: 1rem; font-weight: 600; margin: 0; }
.video-modal-close {
background: none;
border: none;
font-size: 1.4rem;
cursor: pointer;
color: #666;
line-height: 1;
padding: 0.25rem 0.5rem;
}
.dark .video-modal-close { color: #aaa; }
.video-modal-player {
position: relative;
padding-bottom: 56.25%;
height: 0;
}
.video-modal-player iframe {
position: absolute;
inset: 0;
width: 100%;
height: 100%;
}
&lt;/style&gt;
&lt;div id="video-modal" class="video-modal"&gt;
&lt;div class="video-modal-content"&gt;
&lt;div class="video-modal-header"&gt;
&lt;h3 class="video-modal-title" id="video-modal-title"&gt;Video&lt;/h3&gt;
&lt;button class="video-modal-close" onclick="closeVideoModal()"&gt;✕&lt;/button&gt;
&lt;/div&gt;
&lt;div class="video-modal-player"&gt;
&lt;iframe id="video-modal-player" src="" frameborder="0" allowfullscreen allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"&gt;&lt;/iframe&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;script&gt;
// Per-slideshow state, keyed by slideshow id.
let slideIndex = {};
let autoPlayTimer = {};
const slideCaptions = {
project: ["Elektra autonomous vehicle platform", "Multidisciplinary team composition"],
perception: ["Real-time stereo vision processing", "3D scene reconstruction", "Free-space detection", "Pedestrian detection"],
synthia: ["SYNTHIA daytime urban scenario", "SYNTHIA nighttime driving"]
};
// Initialize a slideshow: show the first slide and start auto-advance.
// (The slide count n is unused; slides are queried from the DOM.)
function initSlideshow(id, n) {
slideIndex[id] = 1;
showSlides(1, id);
autoPlay(id);
}
// Jump to slide n (1-based) and restart the auto-advance timer.
function currentSlide(n, id) {
clearTimeout(autoPlayTimer[id]);
showSlides(slideIndex[id] = n, id);
autoPlay(id);
}
// Display slide n for the given slideshow, wrapping at both ends,
// and sync the navigation dots and caption.
function showSlides(n, id) {
const el = document.getElementById(id + '-slideshow');
if (!el) return;
const slides = el.querySelectorAll('.section-slideshow-image');
const dots = el.querySelectorAll('.slideshow-dot');
const cap = document.getElementById(id + '-caption');
if (n &gt; slides.length) slideIndex[id] = 1;
if (n &lt; 1) slideIndex[id] = slides.length;
slides.forEach(s =&gt; s.classList.remove('active'));
dots.forEach(d =&gt; d.classList.remove('active'));
if (slides.length) {
slides[slideIndex[id] - 1].classList.add('active');
if (dots.length) dots[slideIndex[id] - 1].classList.add('active');
if (cap &amp;&amp; slideCaptions[id]) cap.textContent = slideCaptions[id][slideIndex[id] - 1];
}
}
// Advance to the next slide every 5 seconds.
function autoPlay(id) {
autoPlayTimer[id] = setTimeout(() =&gt; {
slideIndex[id]++;
showSlides(slideIndex[id], id);
autoPlay(id);
}, 5000);
}
// Open the modal and load the YouTube embed for the given video.
function openVideoModal(videoId, title) {
document.getElementById('video-modal-title').textContent = title;
document.getElementById('video-modal-player').src = `https://www.youtube.com/embed/${videoId}`;
document.getElementById('video-modal').classList.add('active');
document.body.style.overflow = 'hidden';
}
// Close the modal and stop playback by clearing the iframe source.
function closeVideoModal() {
document.getElementById('video-modal').classList.remove('active');
document.getElementById('video-modal-player').src = '';
document.body.style.overflow = '';
}
// Close the modal when clicking the backdrop or pressing Escape.
document.addEventListener('click', e =&gt; {
if (e.target === document.getElementById('video-modal')) closeVideoModal();
});
document.addEventListener('keydown', e =&gt; { if (e.key === 'Escape') closeVideoModal(); });
&lt;/script&gt;
&lt;div class="elektra-stats"&gt;
&lt;div class="elektra-stat"&gt;&lt;div class="elektra-stat-value"&gt;20+&lt;/div&gt;&lt;div class="elektra-stat-label"&gt;Top-tier Publications&lt;/div&gt;&lt;/div&gt;
&lt;div class="elektra-stat"&gt;&lt;div class="elektra-stat-value"&gt;8&lt;/div&gt;&lt;div class="elektra-stat-label"&gt;Partner Institutions&lt;/div&gt;&lt;/div&gt;
&lt;div class="elektra-stat"&gt;&lt;div class="elektra-stat-value"&gt;400 FPS&lt;/div&gt;&lt;div class="elektra-stat-label"&gt;Real-time Stixel&lt;/div&gt;&lt;/div&gt;
&lt;div class="elektra-stat"&gt;&lt;div class="elektra-stat-value"&gt;2010s&lt;/div&gt;&lt;div class="elektra-stat-label"&gt;Active Period&lt;/div&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;h2 id="autonomous-driving-in-action"&gt;Autonomous Driving in Action&lt;/h2&gt;
&lt;p&gt;Watch the Elektra platform navigate urban roads autonomously — perception, planning, and control integrated end-to-end:&lt;/p&gt;
&lt;p&gt;&lt;button class="featured-video-btn" onclick="openVideoModal('tvZnN65jbCE', 'On-Road Autonomous Driving Demo')"&gt;▶ Watch Autonomous Driving Demo&lt;/button&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="project-overview"&gt;Project Overview&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Elektra&lt;/strong&gt; is an autonomous driving platform and the &lt;strong&gt;Catalan hub of autonomous driving&lt;/strong&gt;, bringing together more than &lt;strong&gt;20 professionals&lt;/strong&gt; from academia and industry. The platform integrates perception, planning, control, and communications to demonstrate production-ready autonomous driving in urban environments.&lt;/p&gt;
&lt;div class="section-row"&gt;
&lt;div class="section-text"&gt;
&lt;p&gt;&lt;strong&gt;Partner institutions:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;CVC-UAB&lt;/strong&gt; — Environment perception &amp;amp; computer vision&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;CAOS-UAB&lt;/strong&gt; — Embedded hardware &amp;amp; GPU optimization&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;UPC-Terrassa&lt;/strong&gt; — Control &amp;amp; path planning&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;CTTC-UPC&lt;/strong&gt; — Positioning &amp;amp; localization&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;UAB-DEIC&lt;/strong&gt; — Vehicle-to-vehicle communications&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;UAB-CEPHIS&lt;/strong&gt; — Electronics &amp;amp; integration&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;CT Ingenieros&lt;/strong&gt; — Vehicle engineering &amp;amp; drive-by-wire&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Municipality of Sant Quirze&lt;/strong&gt; — Test track facility&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The &lt;strong&gt;Computer Vision Center (CVC)&lt;/strong&gt; led the perception stack — my primary contribution to the project. Validation was performed at the Sant Quirze test track and in urban environments, demonstrating the system across controlled and real-world scenarios.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section-media"&gt;
&lt;div class="section-slideshow" id="project-slideshow"&gt;
&lt;div class="section-slideshow-container"&gt;
&lt;div class="section-slideshow-image active"&gt;
&lt;img src="elektra-car.png" alt="Elektra autonomous vehicle platform"&gt;
&lt;/div&gt;
&lt;div class="section-slideshow-image"&gt;
&lt;img src="overview.png" alt="Project team and institution overview"&gt;
&lt;/div&gt;
&lt;div class="slideshow-nav"&gt;
&lt;button class="slideshow-dot active" onclick="currentSlide(1, 'project')"&gt;&lt;/button&gt;
&lt;button class="slideshow-dot" onclick="currentSlide(2, 'project')"&gt;&lt;/button&gt;
&lt;/div&gt;
&lt;div class="slideshow-caption" id="project-caption"&gt;Elektra autonomous vehicle platform&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;script&gt;initSlideshow('project', 2);&lt;/script&gt;
&lt;hr&gt;
&lt;h2 id="perception-system"&gt;Perception System&lt;/h2&gt;
&lt;p&gt;I &lt;strong&gt;initiated and led the full perception pipeline&lt;/strong&gt; — from raw sensor data to high-level scene understanding. The system fuses multiple modalities for robust environmental awareness:&lt;/p&gt;
&lt;div class="section-row reverse"&gt;
&lt;div class="section-text"&gt;
&lt;p&gt;&lt;strong&gt;Obstacle &amp;amp; Pedestrian Detection&lt;/strong&gt;
Real-time CNN-based detection running at 400+ FPS on GPU hardware, with multi-scale detection for obstacles at various distances and temporal consistency across frames.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Free Space &amp;amp; Lane Detection&lt;/strong&gt;
Stixel-based 3D scene representation identifies drivable areas and lane boundaries from dense stereo depth. Adaptive thresholding handles varying road conditions in real time.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;3D Reconstruction &amp;amp; SLAM&lt;/strong&gt;
Stereo cameras provide dense depth estimation. Visual odometry and loop closure detection enable robust 6-DOF localization even in GPS-denied environments (tunnels, urban canyons).&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Sensor Fusion&lt;/strong&gt;
Stereo cameras, monocular vision, LIDAR, and IMU are combined for redundant, accurate scene understanding optimized for embedded automotive hardware.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section-media"&gt;
&lt;div class="section-slideshow" id="perception-slideshow"&gt;
&lt;div class="section-slideshow-container"&gt;
&lt;div class="section-slideshow-image active"&gt;
&lt;img src="image1.png" alt="Real-time stereo vision processing"&gt;
&lt;/div&gt;
&lt;div class="section-slideshow-image"&gt;
&lt;img src="image102.png" alt="3D scene reconstruction"&gt;
&lt;/div&gt;
&lt;div class="section-slideshow-image"&gt;
&lt;img src="image97.png" alt="Free-space detection"&gt;
&lt;/div&gt;
&lt;div class="section-slideshow-image"&gt;
&lt;img src="image104.png" alt="Pedestrian detection"&gt;
&lt;/div&gt;
&lt;div class="slideshow-nav"&gt;
&lt;button class="slideshow-dot active" onclick="currentSlide(1, 'perception')"&gt;&lt;/button&gt;
&lt;button class="slideshow-dot" onclick="currentSlide(2, 'perception')"&gt;&lt;/button&gt;
&lt;button class="slideshow-dot" onclick="currentSlide(3, 'perception')"&gt;&lt;/button&gt;
&lt;button class="slideshow-dot" onclick="currentSlide(4, 'perception')"&gt;&lt;/button&gt;
&lt;/div&gt;
&lt;div class="slideshow-caption" id="perception-caption"&gt;Real-time stereo vision processing&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;script&gt;initSlideshow('perception', 4);&lt;/script&gt;
&lt;hr&gt;
&lt;h2 id="synthia-synthetic-data-for-autonomous-driving"&gt;SYNTHIA: Synthetic Data for Autonomous Driving&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;SYNTHIA&lt;/strong&gt; is a synthetic data generation framework I developed within the Elektra project. It creates photorealistic, automatically labeled driving scenarios, addressing the fundamental bottleneck of acquiring large-scale annotated driving data.&lt;/p&gt;
&lt;div class="section-row"&gt;
&lt;div class="section-text"&gt;
&lt;p&gt;&lt;strong&gt;Capabilities:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Multiple environmental conditions: day, night, rain, fog, snow&lt;/li&gt;
&lt;li&gt;Diverse urban scenes: intersections, pedestrian crossings, parked vehicles&lt;/li&gt;
&lt;li&gt;Automatic ground-truth labels for semantic segmentation, depth, and optical flow&lt;/li&gt;
&lt;li&gt;Scalable: thousands of labeled frames in hours&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Impact:&lt;/strong&gt;
SYNTHIA powered the Elektra perception pipeline, reducing the need for expensive field data collection and enabling systematic testing across conditions that are rare or dangerous to capture in the real world. Results were published at CVPR, ICCV, and ECCV. The dataset was licensed to Intel, Audi, and Huawei.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section-media"&gt;
&lt;div class="section-slideshow" id="synthia-slideshow"&gt;
&lt;div class="section-slideshow-container"&gt;
&lt;div class="section-slideshow-image active"&gt;
&lt;img src="synthia-360.png" alt="SYNTHIA daytime urban scenario"&gt;
&lt;/div&gt;
&lt;div class="section-slideshow-image"&gt;
&lt;img src="synthia-overview.png" alt="SYNTHIA multi-condition overview"&gt;
&lt;/div&gt;
&lt;div class="slideshow-nav"&gt;
&lt;button class="slideshow-dot active" onclick="currentSlide(1, 'synthia')"&gt;&lt;/button&gt;
&lt;button class="slideshow-dot" onclick="currentSlide(2, 'synthia')"&gt;&lt;/button&gt;
&lt;/div&gt;
&lt;div class="slideshow-caption" id="synthia-caption"&gt;SYNTHIA daytime urban scenario&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;script&gt;initSlideshow('synthia', 2);&lt;/script&gt;
&lt;hr&gt;
&lt;h2 id="publications--impact"&gt;Publications &amp;amp; Impact&lt;/h2&gt;
&lt;p&gt;The Elektra project generated &lt;strong&gt;20+ peer-reviewed publications&lt;/strong&gt; at top venues including CVPR, ICCV, ECCV, IEEE TITS, and IEEE T-IV. Key contributions include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Stixel-based 3D scene understanding&lt;/strong&gt; — efficient real-time scene representation&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;SYNTHIA dataset&lt;/strong&gt; — synthetic data for autonomous driving, widely used in the community&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Semantic segmentation&lt;/strong&gt; pipelines for urban scene understanding&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Domain adaptation&lt;/strong&gt; methods bridging synthetic and real data&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Legacy:&lt;/strong&gt; Elektra proved that vision-centric autonomous driving is achievable in real urban conditions and produced benchmark datasets still used by the research community. Alumni of the team now work at leading autonomous driving companies worldwide.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="selected-videos"&gt;Selected Videos&lt;/h2&gt;
&lt;div class="video-links"&gt;
&lt;button class="video-link-btn" onclick="openVideoModal('tvZnN65jbCE', 'Autonomous Driving Demo')"&gt;▶ Autonomous Driving Demo&lt;/button&gt;
&lt;button class="video-link-btn" onclick="openVideoModal('FWM-5Ps8zFo', 'Elektra Project Overview')"&gt;▶ Project Overview&lt;/button&gt;
&lt;button class="video-link-btn" onclick="openVideoModal('7u-mMtm1Q9o', 'Person Detection')"&gt;▶ Person Detection&lt;/button&gt;
&lt;/div&gt;</description></item><item><title>Incremental Domain Adaptation of Deformable Part-Based Models</title><link>https://david-vazquez.com/publication/xu-2014-incremental/</link><pubDate>Wed, 01 Jan 2014 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/xu-2014-incremental/</guid><description/></item><item><title>Learning a Part-Based Pedestrian Detector in Virtual World</title><link>https://david-vazquez.com/publication/xu-2014-learning/</link><pubDate>Wed, 01 Jan 2014 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/xu-2014-learning/</guid><description/></item><item><title>Virtual and Real World Adaptation for Pedestrian Detection</title><link>https://david-vazquez.com/publication/vazquez-2014-virtual/</link><pubDate>Wed, 01 Jan 2014 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/vazquez-2014-virtual/</guid><description/></item><item><title>Adapting a Pedestrian Detector by Boosting LDA Exemplar Classifiers</title><link>https://david-vazquez.com/publication/xu-2013-adapting/</link><pubDate>Tue, 01 Jan 2013 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/xu-2013-adapting/</guid><description/></item><item><title>Adapting Pedestrian Detection from Synthetic to Far Infrared Images</title><link>https://david-vazquez.com/publication/socarras-2013-adapting/</link><pubDate>Tue, 01 Jan 2013 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/socarras-2013-adapting/</guid><description/></item><item><title>Computer Vision Trends and Challenges</title><link>https://david-vazquez.com/publication/bernal-2013-computer/</link><pubDate>Tue, 01 Jan 2013 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/bernal-2013-computer/</guid><description/></item><item><title>Domain Adaptation of Virtual and Real Worlds for Pedestrian Detection</title><link>https://david-vazquez.com/publication/vazquez-2013-domain/</link><pubDate>Tue, 01 Jan 2013 00:00:00 
+0000</pubDate><guid>https://david-vazquez.com/publication/vazquez-2013-domain/</guid><description/></item><item><title>Interactive Training of Human Detectors</title><link>https://david-vazquez.com/publication/vazquez-2013-interactive/</link><pubDate>Tue, 01 Jan 2013 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/vazquez-2013-interactive/</guid><description/></item><item><title>Learning a Multiview Part-Based Model in Virtual World for Pedestrian Detection</title><link>https://david-vazquez.com/publication/xu-2013-learning/</link><pubDate>Tue, 01 Jan 2013 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/xu-2013-learning/</guid><description/></item><item><title>Multi-Task Bilinear Classifiers for Visual Domain Adaptation</title><link>https://david-vazquez.com/publication/xu-2013-multi/</link><pubDate>Tue, 01 Jan 2013 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/xu-2013-multi/</guid><description/></item><item><title>Occlusion Handling via Random Subspace Classifiers for Human Detection</title><link>https://david-vazquez.com/publication/marn-2013-occlusion/</link><pubDate>Tue, 01 Jan 2013 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/marn-2013-occlusion/</guid><description/></item><item><title>Random Forests of Local Experts for Pedestrian Detection</title><link>https://david-vazquez.com/publication/marin-2013-randomforests/</link><pubDate>Tue, 01 Jan 2013 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/marin-2013-randomforests/</guid><description/></item><item><title>Weakly Supervised Automatic Annotation of Pedestrian Bounding Boxes</title><link>https://david-vazquez.com/publication/vazquez-2013-weakly/</link><pubDate>Tue, 01 Jan 2013 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/vazquez-2013-weakly/</guid><description/></item><item><title>Improving HOG with Image Segmentation: Application to Human 
Detection</title><link>https://david-vazquez.com/publication/socarras-2012-improving/</link><pubDate>Sun, 01 Jan 2012 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/socarras-2012-improving/</guid><description/></item><item><title>Pedestrian Detection: Exploring Virtual Worlds</title><link>https://david-vazquez.com/publication/marn-2012-pedestrian/</link><pubDate>Sun, 01 Jan 2012 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/marn-2012-pedestrian/</guid><description/></item><item><title>Unsupervised Domain Adaptation of Virtual and Real Worlds for Pedestrian Detection</title><link>https://david-vazquez.com/publication/vazquez-2012-unsupervised/</link><pubDate>Sun, 01 Jan 2012 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/vazquez-2012-unsupervised/</guid><description/></item><item><title>Color Contribution to Part-Based Person Detection in Different Types of Scenarios</title><link>https://david-vazquez.com/publication/rao-2011-color/</link><pubDate>Sat, 01 Jan 2011 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/rao-2011-color/</guid><description/></item><item><title>Cool World: Domain Adaptation of Virtual and Real Worlds for Human Detection using Active Learning</title><link>https://david-vazquez.com/publication/vazquez-2011-cool/</link><pubDate>Sat, 01 Jan 2011 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/vazquez-2011-cool/</guid><description/></item><item><title>Opponent Colors for Human Detection</title><link>https://david-vazquez.com/publication/anwer-2011-opponent/</link><pubDate>Sat, 01 Jan 2011 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/anwer-2011-opponent/</guid><description/></item><item><title>Virtual Worlds and Active Learning for Human Detection</title><link>https://david-vazquez.com/publication/vazquez-2011-virtual/</link><pubDate>Sat, 01 Jan 2011 00:00:00 
+0000</pubDate><guid>https://david-vazquez.com/publication/vazquez-2011-virtual/</guid><description/></item><item><title>Detecting Small Pedestrians</title><link>https://david-vazquez.com/publication/vazquez-2010-detecting/</link><pubDate>Fri, 01 Jan 2010 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/vazquez-2010-detecting/</guid><description/></item><item><title>Learning Appearance in Virtual Scenarios for Pedestrian Detection</title><link>https://david-vazquez.com/publication/marin-2010-learning/</link><pubDate>Fri, 01 Jan 2010 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/publication/marin-2010-learning/</guid><description/></item><item><title>Talks &amp; Panels</title><link>https://david-vazquez.com/talks/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/talks/</guid><description>&lt;h2 id="invited-talks"&gt;Invited Talks&lt;/h2&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Talk&lt;/th&gt;
&lt;th&gt;Venue&lt;/th&gt;
&lt;th&gt;Date&lt;/th&gt;
&lt;th&gt;Location&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Generative Models in Computer Vision&lt;/td&gt;
&lt;td&gt;Georgian&lt;/td&gt;
&lt;td&gt;May 2021&lt;/td&gt;
&lt;td&gt;Toronto, Canada&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Low Data Learning in Computer Vision&lt;/td&gt;
&lt;td&gt;USC&lt;/td&gt;
&lt;td&gt;Apr 2021&lt;/td&gt;
&lt;td&gt;Santiago de Compostela, Spain&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AI 101: Introduction to AI&lt;/td&gt;
&lt;td&gt;Salesforce&lt;/td&gt;
&lt;td&gt;Jun 2020&lt;/td&gt;
&lt;td&gt;Montreal, Canada&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GCNN for Compatibility Prediction&lt;/td&gt;
&lt;td&gt;ETS&lt;/td&gt;
&lt;td&gt;Apr 2019&lt;/td&gt;
&lt;td&gt;Montreal, Canada&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;The SYNTHIA Dataset&lt;/td&gt;
&lt;td&gt;CVPR Traffic Surveillance Workshop&lt;/td&gt;
&lt;td&gt;Jul 2017&lt;/td&gt;
&lt;td&gt;Honolulu, USA&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Self Driving and Deep Learning&lt;/td&gt;
&lt;td&gt;AI for Data Mining and Big Data&lt;/td&gt;
&lt;td&gt;Apr 2017&lt;/td&gt;
&lt;td&gt;Spain&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;The SYNTHIA Dataset&lt;/td&gt;
&lt;td&gt;Zoox&lt;/td&gt;
&lt;td&gt;Jul 2016&lt;/td&gt;
&lt;td&gt;San Francisco, USA&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Learning to See in a Virtual World&lt;/td&gt;
&lt;td&gt;Alcalá University&lt;/td&gt;
&lt;td&gt;Nov 2015&lt;/td&gt;
&lt;td&gt;Madrid, Spain&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Autonomous Vehicles&lt;/td&gt;
&lt;td&gt;Pint of Science&lt;/td&gt;
&lt;td&gt;Apr 2015&lt;/td&gt;
&lt;td&gt;Barcelona, Spain&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ADAS and Autonomous Vehicles&lt;/td&gt;
&lt;td&gt;ST Dynamics&lt;/td&gt;
&lt;td&gt;Nov 2014&lt;/td&gt;
&lt;td&gt;Singapore&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pedestrian Detection&lt;/td&gt;
&lt;td&gt;Samsung Research&lt;/td&gt;
&lt;td&gt;Jun 2014&lt;/td&gt;
&lt;td&gt;Poland&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Domain Adaptation for Pedestrian Detection&lt;/td&gt;
&lt;td&gt;Daimler AG&lt;/td&gt;
&lt;td&gt;Oct 2013&lt;/td&gt;
&lt;td&gt;Germany&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id="demos"&gt;Demos&lt;/h2&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Demo&lt;/th&gt;
&lt;th&gt;Venue&lt;/th&gt;
&lt;th&gt;Year&lt;/th&gt;
&lt;th&gt;Location&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Elektra Autonomous Vehicle&lt;/td&gt;
&lt;td&gt;UAB for NVIDIA&lt;/td&gt;
&lt;td&gt;2016&lt;/td&gt;
&lt;td&gt;Barcelona, Spain&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Elektra Autonomous Vehicle&lt;/td&gt;
&lt;td&gt;Catalan Government&lt;/td&gt;
&lt;td&gt;2016&lt;/td&gt;
&lt;td&gt;Barcelona, Spain&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Autonomous Vehicle Simulator&lt;/td&gt;
&lt;td&gt;CVPR&lt;/td&gt;
&lt;td&gt;2016&lt;/td&gt;
&lt;td&gt;Las Vegas, USA&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3D Pedestrian Detection&lt;/td&gt;
&lt;td&gt;ECCV&lt;/td&gt;
&lt;td&gt;2015&lt;/td&gt;
&lt;td&gt;Barcelona, Spain&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id="media-coverage"&gt;Media Coverage&lt;/h2&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Topic&lt;/th&gt;
&lt;th&gt;Outlet&lt;/th&gt;
&lt;th&gt;Year&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Tech Can Help Reduce Human Error&lt;/td&gt;
&lt;td&gt;TV3 (Televisió de Catalunya)&lt;/td&gt;
&lt;td&gt;2013&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Eco Driver Project&lt;/td&gt;
&lt;td&gt;BTV (Barcelona TV)&lt;/td&gt;
&lt;td&gt;2013&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Eco Driver Project&lt;/td&gt;
&lt;td&gt;ETB (Euskal Telebista)&lt;/td&gt;
&lt;td&gt;2013&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Face Recognition at Madrid Barajas Airport&lt;/td&gt;
&lt;td&gt;Antena3&lt;/td&gt;
&lt;td&gt;2006&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;</description></item><item><title>Teaching</title><link>https://david-vazquez.com/teaching/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/teaching/</guid><description/></item><item><title>Team</title><link>https://david-vazquez.com/team/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/team/</guid><description>&lt;h2 id="current-phd-interns-servicenow-research-2024-to-2025"&gt;Current PhD Interns (ServiceNow Research, 2024 to 2025)&lt;/h2&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Name&lt;/th&gt;
&lt;th&gt;University&lt;/th&gt;
&lt;th&gt;Supervisor&lt;/th&gt;
&lt;th&gt;Topic&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Zichao Li&lt;/td&gt;
&lt;td&gt;McGill&lt;/td&gt;
&lt;td&gt;Siva Reddy&lt;/td&gt;
&lt;td&gt;Multimodal Learning&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tianyu Zhang&lt;/td&gt;
&lt;td&gt;UdeM&lt;/td&gt;
&lt;td&gt;Yoshua Bengio&lt;/td&gt;
&lt;td&gt;Multimodal Learning&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Suyuchen Wang&lt;/td&gt;
&lt;td&gt;UdeM&lt;/td&gt;
&lt;td&gt;Bang Liu&lt;/td&gt;
&lt;td&gt;Multimodal Learning&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Rabiul Awal&lt;/td&gt;
&lt;td&gt;UdeM&lt;/td&gt;
&lt;td&gt;A. Agrawal&lt;/td&gt;
&lt;td&gt;Multimodal Learning&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Xiangru Jian&lt;/td&gt;
&lt;td&gt;U. Waterloo&lt;/td&gt;
&lt;td&gt;Tamer Özsu&lt;/td&gt;
&lt;td&gt;Multimodal Learning&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Mahsa Massoud&lt;/td&gt;
&lt;td&gt;McGill&lt;/td&gt;
&lt;td&gt;S. Ravanbakhsh&lt;/td&gt;
&lt;td&gt;Multimodal Learning&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ahmed Masry&lt;/td&gt;
&lt;td&gt;U. York&lt;/td&gt;
&lt;td&gt;Enamul Hoque&lt;/td&gt;
&lt;td&gt;Multimodal Learning&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;A. Abaskohi&lt;/td&gt;
&lt;td&gt;UBC&lt;/td&gt;
&lt;td&gt;G. Carenini&lt;/td&gt;
&lt;td&gt;Data Analytics&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Léo Boisvert&lt;/td&gt;
&lt;td&gt;PolyMtl&lt;/td&gt;
&lt;td&gt;Quentin Cappart&lt;/td&gt;
&lt;td&gt;Web Agents&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id="phd-thesis-supervision"&gt;PhD Thesis Supervision&lt;/h2&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Name&lt;/th&gt;
&lt;th&gt;University&lt;/th&gt;
&lt;th&gt;Years&lt;/th&gt;
&lt;th&gt;Topic&lt;/th&gt;
&lt;th&gt;Current Position&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Daniel H. Juarez&lt;/td&gt;
&lt;td&gt;UAB&lt;/td&gt;
&lt;td&gt;2015 to 2020&lt;/td&gt;
&lt;td&gt;CUDA 3D Perception&lt;/td&gt;
&lt;td&gt;ML Compiler Engineer, AMD&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Zhijie Fang&lt;/td&gt;
&lt;td&gt;UAB&lt;/td&gt;
&lt;td&gt;2014 to 2017&lt;/td&gt;
&lt;td&gt;Pedestrian Intention&lt;/td&gt;
&lt;td&gt;Professor, NUS&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Alejandro González&lt;/td&gt;
&lt;td&gt;UAB&lt;/td&gt;
&lt;td&gt;2013 to 2015&lt;/td&gt;
&lt;td&gt;Pedestrian Detection&lt;/td&gt;
&lt;td&gt;Professor, La Salle&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id="selected-intern-alumni"&gt;Selected Intern Alumni&lt;/h2&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Name&lt;/th&gt;
&lt;th&gt;University&lt;/th&gt;
&lt;th&gt;Years&lt;/th&gt;
&lt;th&gt;Topic&lt;/th&gt;
&lt;th&gt;Current Position&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Juan A. Rodriguez&lt;/td&gt;
&lt;td&gt;UPF/Mila&lt;/td&gt;
&lt;td&gt;2021 to 2022&lt;/td&gt;
&lt;td&gt;Figure Generation&lt;/td&gt;
&lt;td&gt;Co-Founder &amp;amp; CEO, QuiverAI&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Oscar Mañas&lt;/td&gt;
&lt;td&gt;UAB&lt;/td&gt;
&lt;td&gt;2020 to 2021&lt;/td&gt;
&lt;td&gt;Remote Sensing&lt;/td&gt;
&lt;td&gt;PhD UdeM, Meta AI&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Bahare Fatemi&lt;/td&gt;
&lt;td&gt;UBC&lt;/td&gt;
&lt;td&gt;2019 to 2023&lt;/td&gt;
&lt;td&gt;Knowledge Graphs&lt;/td&gt;
&lt;td&gt;Research Scientist, Google&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Sai Rajeswar&lt;/td&gt;
&lt;td&gt;UdeM&lt;/td&gt;
&lt;td&gt;2020 to 2023&lt;/td&gt;
&lt;td&gt;Image Generation&lt;/td&gt;
&lt;td&gt;Staff Research Scientist, ServiceNow&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pau Rodríguez&lt;/td&gt;
&lt;td&gt;UAB&lt;/td&gt;
&lt;td&gt;2020 to 2021&lt;/td&gt;
&lt;td&gt;Low Data Learning&lt;/td&gt;
&lt;td&gt;Research Scientist, Apple&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Guillem Cucurull&lt;/td&gt;
&lt;td&gt;UAB&lt;/td&gt;
&lt;td&gt;2018 to 2019&lt;/td&gt;
&lt;td&gt;Graph Neural Nets&lt;/td&gt;
&lt;td&gt;Research Scientist, Meta AI&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Issam H. Laradji&lt;/td&gt;
&lt;td&gt;UBC&lt;/td&gt;
&lt;td&gt;2017 to 2020&lt;/td&gt;
&lt;td&gt;Low Data Learning&lt;/td&gt;
&lt;td&gt;Sr Staff Research Scientist, ServiceNow&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Massimo Caccia&lt;/td&gt;
&lt;td&gt;HEC&lt;/td&gt;
&lt;td&gt;2020 to 2022&lt;/td&gt;
&lt;td&gt;Continual Learning&lt;/td&gt;
&lt;td&gt;Research Scientist, ServiceNow&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Shengchao Liu&lt;/td&gt;
&lt;td&gt;UdeM&lt;/td&gt;
&lt;td&gt;2022 to 2023&lt;/td&gt;
&lt;td&gt;Relational Databases&lt;/td&gt;
&lt;td&gt;Postdoc, UC Berkeley&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;J. Materzynska&lt;/td&gt;
&lt;td&gt;UAB&lt;/td&gt;
&lt;td&gt;2015 to 2016&lt;/td&gt;
&lt;td&gt;Data Generation&lt;/td&gt;
&lt;td&gt;PhD student, MIT&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Eugenio Alcalá&lt;/td&gt;
&lt;td&gt;UPC&lt;/td&gt;
&lt;td&gt;2015 to 2016&lt;/td&gt;
&lt;td&gt;Control and Planning&lt;/td&gt;
&lt;td&gt;CEO, SeaX AI&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Sebastian Ramos&lt;/td&gt;
&lt;td&gt;UAB&lt;/td&gt;
&lt;td&gt;2012 to 2014&lt;/td&gt;
&lt;td&gt;Scene Understanding&lt;/td&gt;
&lt;td&gt;CEO, Tensoreye&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id="msc-thesis-supervision"&gt;MSc Thesis Supervision&lt;/h2&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Name&lt;/th&gt;
&lt;th&gt;University&lt;/th&gt;
&lt;th&gt;Years&lt;/th&gt;
&lt;th&gt;Topic&lt;/th&gt;
&lt;th&gt;Current Position&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Marco Terral&lt;/td&gt;
&lt;td&gt;UAB&lt;/td&gt;
&lt;td&gt;2024 to 2025&lt;/td&gt;
&lt;td&gt;SVG Generation&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Jia Quian&lt;/td&gt;
&lt;td&gt;UPC&lt;/td&gt;
&lt;td&gt;2022 to 2023&lt;/td&gt;
&lt;td&gt;LLM Robot Control&lt;/td&gt;
&lt;td&gt;Founder, Theker&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Axel Barroso&lt;/td&gt;
&lt;td&gt;UAB&lt;/td&gt;
&lt;td&gt;2016 to 2017&lt;/td&gt;
&lt;td&gt;Keypoint Detection&lt;/td&gt;
&lt;td&gt;RS, Niantic&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gabriel Villalonga&lt;/td&gt;
&lt;td&gt;UAB&lt;/td&gt;
&lt;td&gt;2014 to 2015&lt;/td&gt;
&lt;td&gt;3D Mapping&lt;/td&gt;
&lt;td&gt;RS, CVC&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;</description></item></channel></rss>