About
For The Times's analysis of the 2022 FIFA World Cup, R&D and The New York Times graphics desk collaborated on a series of interactive articles that let readers see key plays in 3D. For this project, we developed a complete workflow for creating 3D models of athletes in action from a single photograph, making publication possible within hours of each match.
Technical Workflow: The process begins with journalists photographing matches in Qatar using high-speed bursts to capture key moments. We calculate the 3D camera position using standard goal and penalty area dimensions as geometric guides, then project the image onto 3D field geometry.
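A minimal sketch of the projection step, assuming the camera pose has already been recovered from known pitch landmarks (the goal is 7.32 m wide and the penalty area extends 16.5 m from the goal line): every vertex of a flat field geometry is projected through the reconstructed camera, and the resulting screen coordinates become texture coordinates for the match photograph. The camera values and file name below are placeholders, not Threebird's actual code.

```ts
import * as THREE from 'three';

// Camera pose recovered from the calibration step against known pitch landmarks
// (the numbers here are placeholders; in practice they come from that solve).
const photoCamera = new THREE.PerspectiveCamera(35, 4032 / 3024, 0.1, 500);
photoCamera.position.set(-20, 12, 45);
photoCamera.lookAt(0, 0, 0);
photoCamera.updateMatrixWorld();

// Flat field geometry: a standard pitch is roughly 105 m x 68 m.
const field = new THREE.PlaneGeometry(105, 68, 64, 64);
field.rotateX(-Math.PI / 2); // lay the plane on the ground (XZ)

// Project every field vertex through the photo camera; the resulting
// normalized device coordinates become UVs into the match photograph.
const pos = field.getAttribute('position');
const uv = field.getAttribute('uv');
const v = new THREE.Vector3();
for (let i = 0; i < pos.count; i++) {
  v.fromBufferAttribute(pos, i).project(photoCamera); // NDC in [-1, 1]
  uv.setXY(i, (v.x + 1) / 2, (v.y + 1) / 2);          // map to [0, 1] texture space
}
uv.needsUpdate = true;

// Drape the photograph over the field using the projected UVs.
const photo = new THREE.TextureLoader().load('match-photo.jpg');
const fieldMesh = new THREE.Mesh(field, new THREE.MeshBasicMaterial({ map: photo }));
```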
Machine Learning Integration: Each player is cropped from the photograph and processed through ICON, an open-source machine learning model from the Max Planck Institute that generates 3D player models from single images. The models are manually corrected against broadcast replays and additional photographs to ensure accuracy.
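The per-player preprocessing can be approximated with a plain canvas crop: given a bounding box for each player (from a person detector or manual selection; ICON's own preprocessing and inference are not shown here), cut out a padded square patch at the resolution the reconstruction model expects. The box format, padding factor, and 512-pixel target size are assumptions.

```ts
interface PlayerBox {
  x: number;      // top-left corner in photo pixels
  y: number;
  width: number;
  height: number;
}

// Cut a padded, square crop around each detected player so it can be fed to
// a single-image reconstruction model such as ICON (inference not shown).
function cropPlayers(
  photo: HTMLImageElement,
  boxes: PlayerBox[],
  targetSize = 512,   // assumed input resolution of the reconstruction model
  padding = 0.2       // extra margin around the box so limbs are not clipped
): HTMLCanvasElement[] {
  return boxes.map((box) => {
    // Expand the box into a square centred on the player.
    const side = Math.max(box.width, box.height) * (1 + padding);
    const sx = box.x + box.width / 2 - side / 2;
    const sy = box.y + box.height / 2 - side / 2;

    const canvas = document.createElement('canvas');
    canvas.width = targetSize;
    canvas.height = targetSize;
    const ctx = canvas.getContext('2d')!;
    // drawImage resamples the source square down (or up) to the target size.
    ctx.drawImage(photo, sx, sy, side, side, 0, 0, targetSize, targetSize);
    return canvas;
  });
}
```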
Collaborative 3D Publishing: The final stories are assembled using Threebird, our internal collaborative web-based tool for designing 3D interactive graphics. Built on Three.js and Theatre.js, Threebird allows multiple journalists to work simultaneously on 3D scenes, dramatically reducing collaboration friction compared to traditional 3D authoring software.
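Threebird itself is not public, but the pattern it builds on can be sketched with Theatre.js driving a Three.js camera: a Theatre.js sheet exposes named, keyframeable properties, and the render loop applies whatever values have been authored on the timeline. The project, sheet, and property names below are illustrative, not Threebird's API.

```ts
import * as THREE from 'three';
import studio from '@theatre/studio';
import { getProject, types } from '@theatre/core';

studio.initialize(); // in-browser editing UI for keyframing the scene

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(50, window.innerWidth / window.innerHeight, 0.1, 500);
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// A Theatre.js sheet holds the keyframed timeline for one story scene.
const sheet = getProject('world-cup-play').sheet('Goal 1');

// Expose the camera as a named, animatable object in the editor.
const cameraProps = sheet.object('Camera', {
  position: types.compound({
    x: types.number(0, { range: [-60, 60] }),
    y: types.number(20, { range: [0, 60] }),
    z: types.number(40, { range: [-60, 60] }),
  }),
});

// Apply whichever values have been keyframed on the timeline.
cameraProps.onValuesChange(({ position }) => {
  camera.position.set(position.x, position.y, position.z);
  camera.lookAt(0, 0, 0);
});

renderer.setAnimationLoop(() => renderer.render(scene, camera));
```

In Threebird, scene state of this kind is what multiple journalists edit and review together in the browser, rather than passing project files between desktop 3D packages.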
Visual Design: Players are represented as illustrated sketches rather than photorealistic figures to signal that these are analytical representations, not attempts at lifelike reconstruction. This approach maintains journalistic integrity while providing clear visual analysis.
The workflow let us publish this immersive 3D coverage within hours of each match, reaching readers while the games were still fresh in their minds.
Credits
This series of stories was created and produced by Weiyi Cai, Kenan Davis, Chloé Desaulles, Alexandre Devaux, Or Fleisher, Malika Khurana, Eleanor Lutz, Allison McCann, Mark McKeague, Karthik Patanjali, Scott Reinhard and Bedel Saget.