Synthesis AI, a pioneer in synthetic data technologies, today released Synthesis Humans and Synthesis Scenarios, two new products that represent the broadest and most comprehensive offering of human-centric synthetic data. Through a proprietary combination of generative AI and cinematic CGI pipelines, the Synthesis platform can programmatically create massive amounts of perfectly labeled imagery orders of magnitude faster and at lower cost than current approaches.
The new offerings serve as an extension of the company’s data generation platform and aim to make the creation and deployment of synthetic data more seamless than ever before, further solidifying Synthesis AI’s position as a leader in synthetic data.
“Synthesis Humans and Synthesis Scenarios are a natural evolution in our synthetic data roadmap,” said Yashar Behzadi, CEO and Founder of Synthesis AI. “Synthetic data powered by generative AI is now considered a more efficient paradigm for building computer vision AI. Our new products will enable the development of more sophisticated multi-human AI models that are essential for new applications.”
Synthesis Humans enables machine learning (ML) practitioners to build more sophisticated, production-scale models by providing over 100,000 unique identities and the ability to vary dozens of attributes, including emotion, body type, clothing, and movement. An intuitive user interface (UI) allows developers to quickly create labeled data, and a comprehensive API supports teams that prefer programmatic access and control.
Synthesis Scenarios is the first product to enable fine-tuning of complex multi-human simulations in a variety of ML modeling environments. Industry-leading software, consumer, augmented reality (AR), virtual reality (VR), autonomy, teleconferencing, and metaverse companies are currently using the Synthesis AI platform to build more robust and high-performing models. With the new capabilities in Synthesis Humans and Synthesis Scenarios, Synthesis AI supports the development of ML models for a wide range of existing and new applications, including:
ID Verification: Synthesis Humans provides diverse facial data in a fully privacy-compliant manner, mitigating the privacy and regulatory restrictions associated with obtaining real facial data. This enables more robust and less biased ID verification models for use cases including smartphones, online banking, and contactless ticketing, among others.
AR/VR/Metaverse: AR/VR and metaverse applications rely on photorealistic capture and replication of people in the digital realm. Developing avatars and building these core ML models requires vast amounts of disparate, labeled data. Synthesis Humans offers richly labeled 3D data for the broadest set of demographics available. Synthesis Scenarios supports the development of multi-person tracking and interaction models.
Virtual Try-On: New virtual try-on technologies are emerging to provide immersive and personal customer experiences. Synthesis Humans provides 100,000 unique identities, dozens of body types, and millions of clothing combinations to enable ML engineers to develop robust models of human body shape and posture.
AI Fitness: Synthesis Humans delivers massive amounts of detailed human body motion data, spanning varied body types, camera positions, environments, and exercises, to accelerate the development of new AI fitness applications.
Driver and Passenger Monitoring: Recent EU regulations have prompted car manufacturers, suppliers, and AI companies to build computer vision systems that monitor a driver’s condition and help improve safety. Synthesis Humans can accurately model drivers, key behaviors, and cabin environments to enable the cost-effective and efficient development of higher-performing models.
Teleconferencing: With the rise of telecommuting, employees depend on high-quality video conferencing solutions to remain productive. Leading companies use Synthesis Scenarios to train new ML models that improve video quality and the overall conference call experience.
Pedestrian Detection: Safety is paramount to the deployment and widespread use of autonomous vehicles. The detailed multi-human simulation of Synthesis Scenarios enables the development of more precise pedestrian detection models.