<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Projects | David Vázquez</title><link>https://david-vazquez.com/project/</link><atom:link href="https://david-vazquez.com/project/index.xml" rel="self" type="application/rss+xml"/><description>Projects</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Sat, 01 Nov 2025 00:00:00 +0000</lastBuildDate><image><url>https://david-vazquez.com/media/icon_hu_a3642885bc94ba2d.png</url><title>Projects</title><link>https://david-vazquez.com/project/</link></image><item><title>AI Tools for Indigenous Languages</title><link>https://david-vazquez.com/project/indigenous-languages/</link><pubDate>Sat, 01 Nov 2025 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/project/indigenous-languages/</guid><description>&lt;p&gt;An NSERC Discovery Grant-funded project developing multimodal AI tools for underrepresented languages, with a focus on the Matsigenka language of Peru and Inuktitut in northern Canada. The project follows OCAP and TCPS 2 data sovereignty principles and involves community partners including Tejiendo Puentes en Salud, Ayni Desarrollo, Heritage Lab, and CECONAMA. Conducted through Polytechnique Montréal in collaboration with MILA.&lt;/p&gt;</description></item><item><title>EnterpriseOps-Gym</title><link>https://david-vazquez.com/project/enterpriseops-gym/</link><pubDate>Tue, 01 Apr 2025 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/project/enterpriseops-gym/</guid><description>&lt;p&gt;EnterpriseOps-Gym features 1,150 expert-designed tasks across 8 interconnected enterprise domains, with persistent state, strict verification logic, and policy-aware execution requirements. 
It tests whether AI agents can handle domain expertise, not just general reasoning.&lt;/p&gt;</description></item><item><title>Apriel Model Family</title><link>https://david-vazquez.com/project/apriel/</link><pubDate>Sat, 01 Mar 2025 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/project/apriel/</guid><description>&lt;p&gt;The Apriel family of open language models developed at ServiceNow Research, including base models (Apriel 1.5, 1.6), reasoning models (AprielReasoner), and safety models (AprielGuard, an 8B-parameter guardian model).&lt;/p&gt;</description></item><item><title>BigDocs</title><link>https://david-vazquez.com/project/bigdocs/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/project/bigdocs/</guid><description>&lt;p&gt;BigDocs is a large-scale, open, and permissively licensed dataset for training multimodal models on document understanding and code generation tasks. Published at ICLR 2025.&lt;/p&gt;</description></item><item><title>WorkArena and BrowserGym</title><link>https://david-vazquez.com/project/workarena/</link><pubDate>Mon, 01 Jul 2024 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/project/workarena/</guid><description>&lt;p&gt;WorkArena is a benchmark of tasks based on the ServiceNow platform that measures how well web agents can perform common knowledge work. BrowserGym provides a rich environment for designing and evaluating such agents with multimodal observations and a comprehensive action set. Published at ICML 2024.&lt;/p&gt;</description></item><item><title>SYNTHIA</title><link>https://david-vazquez.com/project/synthia/</link><pubDate>Wed, 01 Jun 2016 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/project/synthia/</guid><description>&lt;p&gt;SYNTHIA is a large collection of synthetic images for semantic segmentation of urban scenes, generated using a video game engine. Published at CVPR 2016 and widely adopted in the autonomous driving research community. 
Licensed for commercial use by Intel, Audi, Huawei, Toyota, and Samsung.&lt;/p&gt;</description></item><item><title>Elektra Autonomous Vehicle</title><link>https://david-vazquez.com/project/elektra/</link><pubDate>Wed, 01 Jan 2014 00:00:00 +0000</pubDate><guid>https://david-vazquez.com/project/elektra/</guid><description>&lt;style&gt;
.section-row {
display: flex;
flex-wrap: wrap;
gap: 2.5rem;
margin: 2.5rem 0;
align-items: flex-start;
}
.section-row.reverse { flex-direction: row-reverse; }
.section-text { flex: 1; min-width: 300px; }
.section-media { flex: 0 0 calc(45% - 1.25rem); }
@media (max-width: 1024px) {
.section-row, .section-row.reverse { flex-direction: column; }
.section-media { flex: 1 1 100%; }
}
.section-slideshow {
position: relative;
border-radius: 8px;
overflow: hidden;
background: #f5f5f5;
}
.section-slideshow-container {
position: relative;
width: 100%;
padding-bottom: 75%;
height: 0;
}
.section-slideshow-image {
position: absolute;
inset: 0;
opacity: 0;
transition: opacity 0.6s ease-in-out;
}
.section-slideshow-image.active { opacity: 1; }
.section-slideshow-image img {
width: 100%;
height: 100%;
object-fit: cover;
display: block;
}
.slideshow-nav {
position: absolute;
bottom: 1rem;
left: 50%;
transform: translateX(-50%);
display: flex;
gap: 0.5rem;
z-index: 10;
}
.slideshow-dot {
width: 10px;
height: 10px;
border-radius: 50%;
background: rgba(255,255,255,0.5);
cursor: pointer;
border: none;
transition: background 0.3s;
}
.slideshow-dot.active { background: white; }
.slideshow-caption {
position: absolute;
bottom: 2.5rem;
left: 0; right: 0;
padding: 0.5rem 1rem;
background: rgba(0,0,0,0.45);
color: white;
font-size: 0.85rem;
text-align: center;
z-index: 5;
}
.elektra-stats {
display: grid;
grid-template-columns: repeat(4, 1fr);
gap: 1.5rem;
margin: 2rem 0;
text-align: center;
}
.elektra-stat {
padding: 1.25rem;
background: #f5f5f5;
border-radius: 8px;
}
.dark .elektra-stat { background: #2a2a2a; }
.elektra-stat-value { font-size: 1.6rem; font-weight: 700; color: #333; }
.dark .elektra-stat-value { color: #eee; }
.elektra-stat-label { font-size: 0.85rem; color: #666; margin-top: 0.4rem; }
.dark .elektra-stat-label { color: #bbb; }
@media (max-width: 640px) { .elektra-stats { grid-template-columns: repeat(2, 1fr); } }
.featured-video-btn {
display: inline-flex;
align-items: center;
gap: 0.5rem;
padding: 0.875rem 1.75rem;
background: rgb(var(--color-primary-600));
color: white;
border-radius: 8px;
cursor: pointer;
font-size: 1rem;
font-weight: 600;
border: none;
margin: 1.25rem 0;
transition: background 0.2s;
}
.featured-video-btn:hover { background: rgb(var(--color-primary-700)); }
.video-links {
display: flex;
flex-wrap: wrap;
gap: 0.75rem;
margin-top: 1.5rem;
}
.video-link-btn {
display: inline-flex;
align-items: center;
gap: 0.4rem;
padding: 0.4rem 0.9rem;
border: 1px solid rgba(0,0,0,0.15);
border-radius: 6px;
font-size: 0.85rem;
font-weight: 500;
cursor: pointer;
background: none;
color: inherit;
font-family: inherit;
transition: border-color 0.2s, background 0.2s;
}
.video-link-btn:hover { border-color: rgb(var(--color-primary-500)); background: rgba(var(--color-primary-50), 0.5); }
.dark .video-link-btn { border-color: rgba(255,255,255,0.15); }
.dark .video-link-btn:hover { border-color: rgb(var(--color-primary-400)); background: rgba(255,255,255,0.05); }
.video-modal {
display: none;
position: fixed;
inset: 0;
background: rgba(0,0,0,0.8);
z-index: 1000;
align-items: center;
justify-content: center;
padding: 2rem;
}
.video-modal.active { display: flex; }
.video-modal-content {
background: white;
border-radius: 12px;
max-width: 900px;
width: 100%;
overflow: hidden;
box-shadow: 0 10px 40px rgba(0,0,0,0.3);
}
.dark .video-modal-content { background: #1a1a1a; }
.video-modal-header {
display: flex;
justify-content: space-between;
align-items: center;
padding: 1rem;
border-bottom: 1px solid #e5e5e5;
}
.dark .video-modal-header { border-bottom-color: #333; }
.video-modal-title { font-size: 1rem; font-weight: 600; margin: 0; }
.video-modal-close {
background: none;
border: none;
font-size: 1.4rem;
cursor: pointer;
color: #666;
line-height: 1;
padding: 0.25rem 0.5rem;
}
.dark .video-modal-close { color: #aaa; }
.video-modal-player {
position: relative;
padding-bottom: 56.25%;
height: 0;
}
.video-modal-player iframe {
position: absolute;
inset: 0;
width: 100%;
height: 100%;
}
&lt;/style&gt;
&lt;div id="video-modal" class="video-modal"&gt;
&lt;div class="video-modal-content"&gt;
&lt;div class="video-modal-header"&gt;
&lt;h3 class="video-modal-title" id="video-modal-title"&gt;Video&lt;/h3&gt;
&lt;button class="video-modal-close" onclick="closeVideoModal()"&gt;✕&lt;/button&gt;
&lt;/div&gt;
&lt;div class="video-modal-player"&gt;
&lt;iframe id="video-modal-player" src="" frameborder="0" allowfullscreen allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"&gt;&lt;/iframe&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;script&gt;
let slideIndex = {};
let autoPlayTimer = {};
const slideCaptions = {
project: ["Elektra autonomous vehicle platform", "Multidisciplinary team composition"],
perception: ["Real-time stereo vision processing", "3D scene reconstruction", "Free-space detection", "Pedestrian detection"],
synthia: ["SYNTHIA daytime urban scenario", "SYNTHIA nighttime driving"]
};
// Initialize a slideshow: show the first slide and start auto-advancing.
// (The slide count is read from the DOM, so the second argument is unused.)
function initSlideshow(id, n) {
slideIndex[id] = 1;
showSlides(1, id);
autoPlay(id);
}
// Jump to slide n (1-based) from a navigation dot and restart the timer.
function currentSlide(n, id) {
clearTimeout(autoPlayTimer[id]);
showSlides(slideIndex[id] = n, id);
autoPlay(id);
}
// Show slide n of slideshow `id`, wrapping past either end, and keep the
// navigation dots and the caption in sync.
function showSlides(n, id) {
const el = document.getElementById(id + '-slideshow');
if (!el) return;
const slides = el.querySelectorAll('.section-slideshow-image');
const dots = el.querySelectorAll('.slideshow-dot');
const cap = document.getElementById(id + '-caption');
if (n &gt; slides.length) slideIndex[id] = 1;
if (n &lt; 1) slideIndex[id] = slides.length;
slides.forEach(s =&gt; s.classList.remove('active'));
dots.forEach(d =&gt; d.classList.remove('active'));
if (slides.length) {
slides[slideIndex[id] - 1].classList.add('active');
if (dots.length) dots[slideIndex[id] - 1].classList.add('active');
if (cap &amp;&amp; slideCaptions[id]) cap.textContent = slideCaptions[id][slideIndex[id] - 1];
}
}
// Advance to the next slide every 5 seconds.
function autoPlay(id) {
autoPlayTimer[id] = setTimeout(() =&gt; {
slideIndex[id]++;
showSlides(slideIndex[id], id);
autoPlay(id);
}, 5000);
}
// Open the modal, load the requested YouTube embed, and lock page scroll.
function openVideoModal(videoId, title) {
document.getElementById('video-modal-title').textContent = title;
document.getElementById('video-modal-player').src = `https://www.youtube.com/embed/${videoId}`;
document.getElementById('video-modal').classList.add('active');
document.body.style.overflow = 'hidden';
}
// Close the modal; clearing the iframe src stops playback immediately.
function closeVideoModal() {
document.getElementById('video-modal').classList.remove('active');
document.getElementById('video-modal-player').src = '';
document.body.style.overflow = '';
}
// Close on a click on the dark backdrop or on the Escape key.
document.addEventListener('click', e =&gt; {
if (e.target === document.getElementById('video-modal')) closeVideoModal();
});
document.addEventListener('keydown', e =&gt; { if (e.key === 'Escape') closeVideoModal(); });
&lt;/script&gt;
&lt;div class="elektra-stats"&gt;
&lt;div class="elektra-stat"&gt;&lt;div class="elektra-stat-value"&gt;20+&lt;/div&gt;&lt;div class="elektra-stat-label"&gt;Top-tier Publications&lt;/div&gt;&lt;/div&gt;
&lt;div class="elektra-stat"&gt;&lt;div class="elektra-stat-value"&gt;8&lt;/div&gt;&lt;div class="elektra-stat-label"&gt;Partner Institutions&lt;/div&gt;&lt;/div&gt;
&lt;div class="elektra-stat"&gt;&lt;div class="elektra-stat-value"&gt;400 FPS&lt;/div&gt;&lt;div class="elektra-stat-label"&gt;Real-time Stixel&lt;/div&gt;&lt;/div&gt;
&lt;div class="elektra-stat"&gt;&lt;div class="elektra-stat-value"&gt;2010s&lt;/div&gt;&lt;div class="elektra-stat-label"&gt;Active Period&lt;/div&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;h2 id="autonomous-driving-in-action"&gt;Autonomous Driving in Action&lt;/h2&gt;
&lt;p&gt;Watch the Elektra platform navigate urban roads autonomously — perception, planning, and control integrated end-to-end:&lt;/p&gt;
&lt;p&gt;&lt;button class="featured-video-btn" onclick="openVideoModal('tvZnN65jbCE', 'On-Road Autonomous Driving Demo')"&gt;▶ Watch Autonomous Driving Demo&lt;/button&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="project-overview"&gt;Project Overview&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Elektra&lt;/strong&gt; is an autonomous driving platform and the &lt;strong&gt;Catalan hub for autonomous driving research&lt;/strong&gt;, bringing together more than &lt;strong&gt;20 professionals&lt;/strong&gt; from academia and industry. The platform integrates perception, planning, control, and communications to demonstrate production-ready autonomous driving in urban environments.&lt;/p&gt;
&lt;div class="section-row"&gt;
&lt;div class="section-text"&gt;
&lt;p&gt;&lt;strong&gt;Partner institutions:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;CVC-UAB&lt;/strong&gt; — Environment perception &amp;amp; computer vision&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;CAOS-UAB&lt;/strong&gt; — Embedded hardware &amp;amp; GPU optimization&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;UPC-Tarrasa&lt;/strong&gt; — Control &amp;amp; path planning&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;CTTC-UPC&lt;/strong&gt; — Positioning &amp;amp; localization&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;UAB-DEIC&lt;/strong&gt; — Vehicle-to-vehicle communications&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;UAB-CEPHIS&lt;/strong&gt; — Electronics &amp;amp; integration&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;CT Ingenieros&lt;/strong&gt; — Vehicle engineering &amp;amp; drive-by-wire&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Municipality of Sant Quirze&lt;/strong&gt; — Test track facility&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The &lt;strong&gt;Computer Vision Center (CVC)&lt;/strong&gt; led the perception stack — my primary contribution to the project. Validation was performed at the Sant Quirze test track and in urban environments, demonstrating the system across controlled and real-world scenarios.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section-media"&gt;
&lt;div class="section-slideshow" id="project-slideshow"&gt;
&lt;div class="section-slideshow-container"&gt;
&lt;div class="section-slideshow-image active"&gt;
&lt;img src="elektra-car.png" alt="Elektra autonomous vehicle platform"&gt;
&lt;/div&gt;
&lt;div class="section-slideshow-image"&gt;
&lt;img src="overview.png" alt="Project team and institution overview"&gt;
&lt;/div&gt;
&lt;div class="slideshow-nav"&gt;
&lt;button class="slideshow-dot active" onclick="currentSlide(1, 'project')"&gt;&lt;/button&gt;
&lt;button class="slideshow-dot" onclick="currentSlide(2, 'project')"&gt;&lt;/button&gt;
&lt;/div&gt;
&lt;div class="slideshow-caption" id="project-caption"&gt;Elektra autonomous vehicle platform&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;script&gt;initSlideshow('project', 2);&lt;/script&gt;
&lt;hr&gt;
&lt;h2 id="perception-system"&gt;Perception System&lt;/h2&gt;
&lt;p&gt;I &lt;strong&gt;initiated and led the full perception pipeline&lt;/strong&gt; — from raw sensor data to high-level scene understanding. The system fuses multiple modalities for robust environmental awareness:&lt;/p&gt;
&lt;div class="section-row reverse"&gt;
&lt;div class="section-text"&gt;
&lt;p&gt;&lt;strong&gt;Obstacle &amp;amp; Pedestrian Detection&lt;/strong&gt;
Real-time CNN-based detection on GPU hardware, with multi-scale processing for obstacles at various distances and temporal consistency across frames.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Free Space &amp;amp; Lane Detection&lt;/strong&gt;
Stixel-based 3D scene representation, computed at 400+ FPS on GPU, identifies drivable areas and lane boundaries from dense stereo depth. Adaptive thresholding handles varying road conditions in real time.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;3D Reconstruction &amp;amp; SLAM&lt;/strong&gt;
Stereo cameras provide dense depth estimation. Visual odometry and loop closure detection enable robust 6-DOF localization even in GPS-denied environments (tunnels, urban canyons).&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Sensor Fusion&lt;/strong&gt;
Stereo cameras, monocular vision, LIDAR, and IMU are combined for redundant, accurate scene understanding optimized for embedded automotive hardware.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section-media"&gt;
&lt;div class="section-slideshow" id="perception-slideshow"&gt;
&lt;div class="section-slideshow-container"&gt;
&lt;div class="section-slideshow-image active"&gt;
&lt;img src="image1.png" alt="Real-time stereo vision processing"&gt;
&lt;/div&gt;
&lt;div class="section-slideshow-image"&gt;
&lt;img src="image102.png" alt="3D scene reconstruction"&gt;
&lt;/div&gt;
&lt;div class="section-slideshow-image"&gt;
&lt;img src="image97.png" alt="Free-space detection"&gt;
&lt;/div&gt;
&lt;div class="section-slideshow-image"&gt;
&lt;img src="image104.png" alt="Pedestrian detection"&gt;
&lt;/div&gt;
&lt;div class="slideshow-nav"&gt;
&lt;button class="slideshow-dot active" onclick="currentSlide(1, 'perception')"&gt;&lt;/button&gt;
&lt;button class="slideshow-dot" onclick="currentSlide(2, 'perception')"&gt;&lt;/button&gt;
&lt;button class="slideshow-dot" onclick="currentSlide(3, 'perception')"&gt;&lt;/button&gt;
&lt;button class="slideshow-dot" onclick="currentSlide(4, 'perception')"&gt;&lt;/button&gt;
&lt;/div&gt;
&lt;div class="slideshow-caption" id="perception-caption"&gt;Real-time stereo vision processing&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;script&gt;initSlideshow('perception', 4);&lt;/script&gt;
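&lt;p&gt;As a rough illustration of the stixel idea behind the free-space module: given a dense disparity map from the stereo pair, each image column is scanned from the bottom upward, and the first pixel that stops matching a flat-ground disparity model marks the base of the nearest obstacle; everything below it is drivable. The Python sketch below is a deliberately simplified, hypothetical illustration (flat ground, a single fixed threshold), not the project's actual GPU implementation:&lt;/p&gt;

```python
def free_space_per_column(disparity, ground_disparity, thresh=1.0):
    """disparity: row-major grid (list of rows) of stereo disparities.
    ground_disparity: expected disparity of a flat road at each row.
    Scan every column from the bottom row upward; the first pixel that
    deviates from the flat-ground model is the base of the nearest
    obstacle (the stixel bottom). Returns one row index per column;
    rows at or below that index are free space."""
    h, w = len(disparity), len(disparity[0])
    base = []
    for u in range(w):
        b = 0  # 0 means no obstacle: the whole column reads as ground
        for v in range(h - 1, -1, -1):  # bottom row upward
            if abs(disparity[v][u] - ground_disparity[v]) > thresh:
                b = v + 1  # free space lies strictly below this pixel
                break
        base.append(b)
    return base

# Toy 4x2 disparity image: column 0 is pure road; column 1 has an
# obstacle (constant disparity 4.0) occupying the top two rows.
disp = [[1.0, 4.0],
        [2.0, 4.0],
        [3.0, 3.0],
        [4.0, 4.0]]
ground = [1.0, 2.0, 3.0, 4.0]
print(free_space_per_column(disp, ground))  # [0, 2]
```

&lt;p&gt;Published stixel formulations typically replace this fixed threshold with a probabilistic ground model solved per column by dynamic programming; the sketch above only conveys the core column-wise idea.&lt;/p&gt;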
&lt;hr&gt;
&lt;h2 id="synthia-synthetic-data-for-autonomous-driving"&gt;SYNTHIA: Synthetic Data for Autonomous Driving&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;SYNTHIA&lt;/strong&gt; is a synthetic data generation framework I developed within the Elektra project that creates photorealistic, automatically labeled driving scenarios — addressing the fundamental bottleneck of acquiring large-scale annotated driving data.&lt;/p&gt;
&lt;div class="section-row"&gt;
&lt;div class="section-text"&gt;
&lt;p&gt;&lt;strong&gt;Capabilities:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Multiple environmental conditions: day, night, rain, fog, snow&lt;/li&gt;
&lt;li&gt;Diverse urban scenes: intersections, pedestrian crossings, parked vehicles&lt;/li&gt;
&lt;li&gt;Automatic ground-truth labels for semantic segmentation, depth, and optical flow&lt;/li&gt;
&lt;li&gt;Scalable: thousands of labeled frames in hours&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Impact:&lt;/strong&gt;
SYNTHIA powered the Elektra perception pipeline, reducing the need for expensive field data collection and enabling systematic testing across conditions that are rare or dangerous to capture in the real world. Results were published at CVPR, ICCV, and ECCV. The dataset was licensed for commercial use to Intel, Audi, Huawei, Toyota, and Samsung.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section-media"&gt;
&lt;div class="section-slideshow" id="synthia-slideshow"&gt;
&lt;div class="section-slideshow-container"&gt;
&lt;div class="section-slideshow-image active"&gt;
&lt;img src="synthia-360.png" alt="SYNTHIA daytime urban scenario"&gt;
&lt;/div&gt;
&lt;div class="section-slideshow-image"&gt;
&lt;img src="synthia-overview.png" alt="SYNTHIA multi-condition overview"&gt;
&lt;/div&gt;
&lt;div class="slideshow-nav"&gt;
&lt;button class="slideshow-dot active" onclick="currentSlide(1, 'synthia')"&gt;&lt;/button&gt;
&lt;button class="slideshow-dot" onclick="currentSlide(2, 'synthia')"&gt;&lt;/button&gt;
&lt;/div&gt;
&lt;div class="slideshow-caption" id="synthia-caption"&gt;SYNTHIA daytime urban scenario&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;script&gt;initSlideshow('synthia', 2);&lt;/script&gt;
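&lt;p&gt;The automatic ground truth above comes essentially for free from the renderer: the engine already knows which object every pixel belongs to, so exporting a per-pixel object-ID buffer and mapping IDs to classes yields a pixel-perfect segmentation label with no human annotation. A minimal, hypothetical sketch (the IDs and class set are illustrative, not SYNTHIA's real encoding):&lt;/p&gt;

```python
# Hypothetical sketch: a game engine can emit, alongside each RGB frame,
# a buffer of per-pixel object IDs; mapping those IDs to semantic classes
# produces a segmentation label with zero manual annotation effort.
ID_TO_CLASS = {  # illustrative IDs, not SYNTHIA's real encoding
    0: "sky", 1: "road", 2: "sidewalk", 3: "building",
    4: "vehicle", 5: "pedestrian",
}

def ids_to_semantic_labels(id_buffer):
    """Convert a row-major grid of engine object IDs into class names,
    falling back to 'void' for IDs outside the known mapping."""
    return [[ID_TO_CLASS.get(pix, "void") for pix in row]
            for row in id_buffer]

frame_ids = [[0, 0, 3],
             [1, 1, 4]]
print(ids_to_semantic_labels(frame_ids))
# [['sky', 'sky', 'building'], ['road', 'road', 'vehicle']]
```

&lt;p&gt;Depth and optical-flow ground truth follow the same principle, read from the renderer's depth buffer and known camera and object motion instead of the ID buffer.&lt;/p&gt;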
&lt;hr&gt;
&lt;h2 id="publications--impact"&gt;Publications &amp;amp; Impact&lt;/h2&gt;
&lt;p&gt;The Elektra project generated &lt;strong&gt;20+ peer-reviewed publications&lt;/strong&gt; at top venues including CVPR, ICCV, ECCV, IEEE TITS, and IEEE T-IV. Key contributions include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Stixel-based 3D scene understanding&lt;/strong&gt; — efficient real-time scene representation&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;SYNTHIA dataset&lt;/strong&gt; — synthetic data for autonomous driving, widely used in the community&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Semantic segmentation&lt;/strong&gt; pipelines for urban scene understanding&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Domain adaptation&lt;/strong&gt; methods bridging synthetic and real data&lt;/li&gt;
&lt;/ul&gt;
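&lt;p&gt;To make the domain-adaptation bullet concrete: the simplest baselines in this family align low-order feature statistics across domains, shifting and rescaling each synthetic feature dimension to match the mean and spread measured on unlabeled real data. The sketch below is a generic illustration of that idea, not one of the project's published methods:&lt;/p&gt;

```python
import statistics

def align_feature_stats(synthetic, real):
    """Per-dimension statistic matching: shift and rescale each column of
    the synthetic feature matrix so its mean and standard deviation match
    those measured on the (unlabeled) real feature matrix."""
    out_cols = []
    for s, r in zip(zip(*synthetic), zip(*real)):
        s_mu, r_mu = statistics.fmean(s), statistics.fmean(r)
        s_sd = statistics.pstdev(s) or 1.0  # guard against zero spread
        r_sd = statistics.pstdev(r)
        out_cols.append([(x - s_mu) / s_sd * r_sd + r_mu for x in s])
    return [list(row) for row in zip(*out_cols)]

# Synthetic features with mean 1, std 1; real features with mean 12, std 2.
print(align_feature_stats([[0.0], [2.0]], [[10.0], [14.0]]))  # [[10.0], [14.0]]
```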
&lt;p&gt;&lt;strong&gt;Legacy:&lt;/strong&gt; Elektra proved that vision-centric autonomous driving is achievable in real urban conditions and produced benchmark datasets still used by the research community. Alumni of the team now work at leading autonomous driving companies worldwide.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="selected-videos"&gt;Selected Videos&lt;/h2&gt;
&lt;div class="video-links"&gt;
&lt;button class="video-link-btn" onclick="openVideoModal('tvZnN65jbCE', 'Autonomous Driving Demo')"&gt;▶ Autonomous Driving Demo&lt;/button&gt;
&lt;button class="video-link-btn" onclick="openVideoModal('FWM-5Ps8zFo', 'Elektra Project Overview')"&gt;▶ Project Overview&lt;/button&gt;
&lt;button class="video-link-btn" onclick="openVideoModal('7u-mMtm1Q9o', 'Person Detection')"&gt;▶ Person Detection&lt;/button&gt;
&lt;/div&gt;</description></item></channel></rss>