Geospatial Video Intelligence Hackathon
Build Video Intelligence for the World's Most Critical Geospatial Workflows
Join developers and university researchers for an intensive weekend of building production-ready video understanding systems for geospatial intelligence. This isn't about student demos—it's about solving the technical challenges that define modern geospatial operations: automated mapping, infrastructure monitoring, and multi-source intelligence at scale.
Why St. Louis: You're building in the epicenter of American geospatial intelligence—home to NGA's new $1.75B West campus and the nation's densest concentration of geospatial expertise. The systems you build this weekend could inform how satellites, drones, and ground sensors generate actionable intelligence.
Why Now: Most geospatial data exists as video—satellite feeds, aerial reconnaissance, surveillance streams—yet 90% of it remains manually analyzed or underutilized. Video foundation models purpose-built for spatio-temporal reasoning change that equation. This hackathon explores what's possible when video understanding meets geospatial workflows.
The Challenge: Four Tracks, Real Problems
Choose your track and build solutions that address actual production bottlenecks faced by defense contractors, energy companies, and geospatial operators:
🗺️ Track 1: Geospatial AI for Automated Mapping
Transform how feature extraction and mapping happens at scale:
Automated Quality Assurance for Open Spatial Data: Build video-powered validation systems that detect map errors, broken data, and discrepancies in open geospatial datasets. Using Overture Maps Foundation's open map data as the authoritative reference layer, identify mismatches between video-derived observations and the existing base layers at machine speed, surfacing the coverage gaps and stale records that manual review would take weeks to catch.
Spatial Data Enrichment using GeoAI: Automate the enrichment of open geospatial data using street-level and low-altitude aerial video to extract feature attributes at scale. For Overture Maps Foundation's sponsored track, two priority focus areas: (1) detect and extract business signage, names, and branding from video to validate and enrich Overture's existing Places data; (2) detect sidewalk presence, width, and condition to build out Overture's pedestrian network layer, benchmarking video-based extraction accuracy against still-imagery baselines to quantify the value of temporal context.
Multimodal Positional Accuracy: Automatically identify and geolocate map features from street-level video, then cross-reference them against OpenStreetMap and Overture Maps data to generate positional accuracy assessments (RMSE, CE90).
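The accuracy metrics named above are straightforward to compute once video-derived features have been matched to reference features. A minimal sketch with hypothetical coordinate pairs, assuming positions are already projected into a meter-based CRS:

```python
import math

def positional_accuracy(pairs):
    """Compute RMSE and CE90 from matched (video-derived, reference) points.

    Each pair is ((x1, y1), (x2, y2)) in a projected CRS with meter units.
    CE90 is the error radius containing 90% of horizontal errors.
    """
    errors = [math.hypot(a[0] - b[0], a[1] - b[1]) for a, b in pairs]
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    ce90 = sorted(errors)[max(0, math.ceil(0.9 * len(errors)) - 1)]
    return rmse, ce90

# Hypothetical matches: video-derived position vs. Overture/OSM reference
matches = [((0.0, 0.0), (1.0, 0.0)),
           ((5.0, 5.0), (5.0, 3.0)),
           ((2.0, 2.0), (2.0, 2.5))]
rmse, ce90 = positional_accuracy(matches)  # errors of 1.0 m, 2.0 m, 0.5 m
```

A production version would also handle the harder part this sketch skips: associating each video detection with the correct reference feature in the first place.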
Real-world application: NGA's director warned the agency would need 8 million analysts to manually process its growing geospatial data — yet 70% of the planet remains insufficiently mapped, and the military's open-source Releasable Basemap Tiles are only as current as their crowd-sourced inputs. Automated video AI can extract road conditions, infrastructure status, and building features invisible to satellites, closing the mapping gap 48x faster than human analysts at a fraction of the cost.
Sponsored by Overture Maps Foundation: All technical solutions and derived data from this track must be released as open data or open IP (software license: Apache 2.0, data license: CDLA 2.0). Overture's open map data will serve as the reference layer for validation.
⚡ Track 2: Energy Infrastructure Monitoring
Build intelligent systems for critical infrastructure analysis:
Drone-Based Pipeline Analysis: Process drone footage of oil, gas, and utility infrastructure to detect anomalies, assess condition, and identify maintenance needs—automatically flag potential failures before they occur
Utility Network Inspection: Analyze overhead power line video for equipment damage, vegetation encroachment, and compliance with safety standards—generate timestamped inspection reports
Agent Workflow Integration: Combine video analysis with structured data (maintenance logs, sensor readings, regulatory requirements) to create comprehensive infrastructure health assessments
Real-world application: Energy companies fly thousands of drone hours annually but lack systems to process footage at scale. Automated analysis could reduce infrastructure inspection costs by 60-80% while improving detection accuracy.
🔗 Track 3: Multimodal Geospatial Workloads
Process video alongside other intelligence sources:
Video + Structured Data Fusion: Combine video analysis with CSV/database records to correlate visual observations with operational data—identify patterns invisible in single-source analysis
Video + Document Intelligence: Merge video footage analysis with PDF reports, satellite imagery, and text intelligence to create comprehensive situational awareness products
Multi-Source Intelligence Applications: Build systems that synthesize video, geospatial databases, and unstructured text to answer complex analytical questions
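One concrete pattern for the fusion track: join video-derived detections to operational records by asset ID and nearest timestamp. A minimal stdlib-only sketch with made-up field names and values:

```python
from datetime import datetime, timedelta

# Hypothetical detections emitted by a video-understanding pipeline
detections = [
    {"ts": datetime(2026, 4, 25, 10, 0), "label": "corrosion", "asset": "P-12"},
    {"ts": datetime(2026, 4, 25, 11, 30), "label": "vegetation", "asset": "L-07"},
]

# Hypothetical rows parsed from a maintenance-log CSV
log_rows = [
    {"ts": datetime(2026, 4, 25, 9, 55), "asset": "P-12", "note": "pressure drop"},
    {"ts": datetime(2026, 4, 24, 8, 0), "asset": "L-07", "note": "routine check"},
]

def correlate(detections, log_rows, tolerance=timedelta(hours=1)):
    """Pair each detection with the nearest same-asset log entry within tolerance."""
    matches = []
    for det in detections:
        candidates = [r for r in log_rows
                      if r["asset"] == det["asset"]
                      and abs(r["ts"] - det["ts"]) <= tolerance]
        if candidates:
            nearest = min(candidates, key=lambda r: abs(r["ts"] - det["ts"]))
            matches.append((det["label"], nearest["note"]))
    return matches

# Pairs "corrosion" with "pressure drop"; the L-07 log entry falls
# outside the one-hour window and is left unmatched.
results = correlate(detections, log_rows)
```

The interesting output is exactly what the track description promises: a correlation (visual corrosion plus a logged pressure drop on the same asset) that neither source shows on its own.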
Real-world application: Intelligence analysts spend hours manually correlating data from different sources. Multimodal systems could surface connections in minutes, enabling faster decision-making in time-sensitive situations.
🏭 Track 4: Advanced Manufacturing
Private-sector applications with geospatial components:
Construction Site Monitoring: Analyze video from construction sites to track equipment deployment, detect safety violations, and verify progress against plans—generate automated compliance reports
Industrial Equipment Detection: Process facility video to identify specific tools, machinery, and personnel positions for safety monitoring and operational efficiency
Real-world application: Boeing and other advanced manufacturers spend significant resources on manual video review for quality control and safety compliance. Automated systems could reduce review time by 75% while improving detection rates.
What You're Working With
Technology Stack
TwelveLabs Video Foundation Models:
Marengo 3.0: Multimodal embeddings for semantic video search—find moments based on visual content, dialogue, actions, or context without manual tagging
Pegasus 1.2: Video-to-text generation for summaries, reports, chapters, and structured intelligence outputs
Available on Amazon Bedrock
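Under the hood, embedding-based video search of the kind Marengo enables reduces to nearest-neighbor lookup in vector space. A toy sketch with made-up 3-d vectors standing in for real clip and query embeddings (a real pipeline would obtain these from the model, e.g. via Bedrock, and use a vector index rather than a full scan):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy stand-ins for clip embeddings, keyed by (video_id, start_sec)
clip_index = {
    ("drone_017", 42): [0.9, 0.1, 0.0],
    ("drone_017", 96): [0.1, 0.8, 0.2],
    ("sat_003", 10):   [0.0, 0.2, 0.9],
}

def search(query_vec, index, top_k=2):
    """Rank clips by cosine similarity to the query embedding."""
    ranked = sorted(index.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [clip for clip, _ in ranked[:top_k]]

# A query embedding close to the first clip ranks it first
results = search([1.0, 0.0, 0.1], clip_index)
```

The same lookup works whether the query vector comes from text ("excavator near a pipeline") or from another video clip, which is what makes cross-modal search possible without manual tagging.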
Sample Datasets Provided:
Drone footage of infrastructure and pipelines
Aerial surveillance video from urban environments
Satellite video feeds with temporal progression
Construction site monitoring footage
Manufacturing floor operation videos
TwelveLabs API Documentation →
Prizes & Recognition
Significant cash prizes and extended API credits for winning teams.
Top teams will receive:
Cash prizes for 1st, 2nd, and 3rd place overall
Extended TwelveLabs API credits for continued development
Recognition from GeoSTL and partner organizations
Judging Panel: Your solutions will be evaluated by:
TwelveLabs technical leadership
GeoSTL representatives
Partner company executives from defense, energy, and manufacturing sectors
Event Details
📅 When: April 25-26, 2026 (Weekend)
📍 Where: T-REX Geospatial Innovation Center, Downtown St. Louis (39 N 10th St, St. Louis, MO 63101)
⏰ Schedule:
Saturday, April 25
9:00 AM: Registration & Welcome Coffee
9:30 AM: Opening Keynote + Challenge Introductions
10:15 AM: Team Formation & Technical Workshop
11:00 AM: Hacking Begins
12:30 PM: Lunch + Technical Office Hours
3:00 PM: Mid-point Check-in + Mentor Sessions
6:00 PM: Dinner On Your Own
Sunday, April 26
9:00 AM: Hacking Continues
12:00 PM: Lunch + Final Sprint
2:00 PM: Submissions Due
2:30 PM: Project Presentations (5 min each)
4:30 PM: Judging Deliberation
5:00 PM: Awards Ceremony + Closing Reception
🎯 Format: In-person collaboration with mentor support
👥 Expected Participation: 80-100 developers and researchers
🍕 Provided: All meals except Saturday dinner, plus snacks, beverages, and late-night fuel
Who Should Apply
This event targets two distinct but complementary communities:
Defense Professionals
This is your opportunity to explore cutting-edge video intelligence capabilities for mission-critical workflows. Build prototypes that could inform actual program requirements and deployments.
University Researchers & Students
Students from Washington University, Saint Louis University, and other regional institutions: this is serious technical work with meaningful outcomes. The prize structure reflects that—competitive compensation for teams that build production-quality solutions. Parents expecting ROI on tuition? This is how you demonstrate it.
Common Ground: Both audiences share a commitment to technical excellence and understanding that geospatial intelligence requires precision, not approximation.
Why the T-REX Innovation Center?
The T-REX facility was purpose-built for this kind of collaboration. Located in downtown St. Louis with the National Geospatial-Intelligence Agency campus just miles away, it's where the geospatial community gathers to solve hard problems. GeoSTL manages the space specifically to advance St. Louis as the global center for geospatial technology.
Facility Features:
High-speed networking and compute infrastructure
Collaboration spaces designed for intensive development
Proximity to NGA and geospatial contractor offices
On-site technical support and security
GeoSTL Partnership: This event is organized with the nonprofit dedicated to bringing geospatial jobs, innovation, and workforce development to the St. Louis region.
Application Process
Space is limited to 80-100 participants to ensure high-quality collaboration and mentorship.
Applications will be reviewed on a rolling basis—early applications receive priority consideration.
What we're looking for:
Technical experience with video processing, geospatial analysis, or ML/AI systems
Understanding of production challenges in mapping, infrastructure monitoring, or intelligence workflows
Ability to build functional prototypes in 48 hours
Interest in deploying solutions at enterprise scale
Team formation encouraged (2-4 people per team optimal)
Application Requirements:
Brief technical background (200 words)
Preferred challenge track(s)
GitHub profile or portfolio link (optional but helpful)
Team members if pre-formed (teams can also form at the event)
Registration Deadline: April 15, 2026
Frequently Asked Questions
Can I participate remotely?
No. This is an in-person event designed for intensive collaboration. Remote participation reduces the quality of mentorship and team dynamics that make these events valuable.
What if I don't have a team?
Perfectly fine. Many participants arrive individually and form teams during the Saturday morning session. We facilitate team formation based on track interests and complementary skills.
Are meals provided?
Yes. Meals from Saturday breakfast through Sunday lunch are included, except Saturday dinner, which is on your own. Dietary restrictions will be accommodated.
Can I work on multiple tracks?
You can explore multiple tracks, but final submissions should focus on one track for judging purposes. Depth beats breadth.
What if my solution uses proprietary data?
We'll provide sample datasets for all tracks. You can use your own data if it's unclassified and you have rights to share results publicly.
Contact
Event Organizer: James Le, TwelveLabs Developer Experience ([email protected])
Venue & Logistics: Kathleen Klein, T-REX Innovation Center ([email protected])
GeoSTL Partnership: Molly Brady, Director of Engagement and Policy ([email protected])
This is your opportunity to build the geospatial intelligence infrastructure that could power operations for the next decade. See you in St. Louis.