Nvidia has unveiled Cosmos-Transfer1, a pioneering AI model designed to generate highly photorealistic world simulations for training robotics and autonomous vehicles. Available now on Hugging Face, this conditional world generation system addresses a long-standing gap in physical AI development: translating simulated training environments into reliable real-world performance. The release marks a significant step in Nvidia’s broader push to empower developers with tools that streamline the creation and testing of physical AI systems, marrying synthetic data with real-world dynamics to accelerate innovation across robotics and self-driving technologies.
Cosmos-Transfer1: Overview, design, and capabilities
Cosmos-Transfer1 stands out as a conditional, multimodal world generation model that can produce complex, controllable simulations based on a variety of inputs. The core capability is to generate world scenes that respond to multiple spatial control inputs drawn from different modalities. These modalities include segmentation maps, depth information, and edge boundaries, among others. Nvidia researchers describe Cosmos-Transfer1 as enabling highly controllable world generation and supporting world-to-world transfer use cases such as Sim2Real, where simulation-trained policies and controllers are transferred with high fidelity to real-world deployment.
The model’s design integrates an adaptive multimodal control system that allows developers to weight different visual inputs differently across various parts of a scene. This means that, for a given frame, depth information might dominate in one region (such as a robot’s arm or a nearby obstacle), while edge information or segmentation might be emphasized in another region (like distant surroundings or textures). The net effect is a generated environment that preserves crucial aspects of the original scene while introducing natural variations that reflect real-world complexity. This nuanced control is crucial for creating training environments that are both accurate and diverse, enabling more robust policy learning and more reliable sim-to-real transfer.
In practice, Cosmos-Transfer1 supports a flexible design where input signals can be adjusted spatially; different regions of a scene can be governed by different inputs, allowing for targeted manipulation of the environment without sacrificing global coherence. The conditional scheme is described as adaptive and customizable, empowering developers to tune the influence of each conditional input at specific locations within a scene. This architectural choice is a major departure from traditional, monolithic generation approaches, which often apply uniform weighting across an entire image or sequence.
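To make the idea of spatially adaptive weighting concrete, the following sketch blends two control signals using per-pixel weight maps that are normalized to sum to one at each location. This is a purely illustrative toy, not Cosmos-Transfer1's actual interface; the function name, array layout, and the choice of depth and edge signals are all assumptions for demonstration.

```python
import numpy as np

def blend_control_signals(signals, weight_maps):
    """Blend per-modality control signals using spatial weight maps.

    signals:     dict of modality name -> (H, W) float array
    weight_maps: dict of modality name -> (H, W) float array
    Weights are normalized per pixel so each location's weights sum to 1.
    """
    names = list(signals)
    stacked_w = np.stack([weight_maps[n] for n in names])  # (M, H, W)
    stacked_s = np.stack([signals[n] for n in names])      # (M, H, W)
    totals = np.clip(stacked_w.sum(axis=0, keepdims=True), 1e-8, None)
    return (stacked_w / totals * stacked_s).sum(axis=0)

# Toy scene: let depth dominate the left half (say, near a robot arm)
# and edge information dominate the right half (distant background).
H, W = 4, 8
depth = np.full((H, W), 0.9)
edges = np.full((H, W), 0.2)
w_depth = np.zeros((H, W))
w_depth[:, : W // 2] = 1.0
w_edge = 1.0 - w_depth

blended = blend_control_signals({"depth": depth, "edge": edges},
                                {"depth": w_depth, "edge": w_edge})
# Left half tracks the depth signal (0.9); right half tracks edges (0.2).
```

The point of the sketch is simply that a single generated frame can be governed by different conditioning signals in different regions, which is the behavior the adaptive control scheme described above enables.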
The immediate implication for developers is a more precise set of controls for shaping synthetic environments. For robotics, this translates to maintaining exacting control over how a robotic arm appears, moves, and interacts with the environment, while simultaneously enabling a broader range of background contexts. For autonomous vehicles, Cosmos-Transfer1 makes it possible to preserve road layout, traffic patterns, and other critical structural cues while varying weather conditions, lighting, or urban settings to broaden the diversity of training scenarios. The ability to preserve essential physical dynamics while introducing realistic variations significantly enhances the quality and relevance of synthetic data used in model training.
Cosmos-Transfer1 is positioned as part of Nvidia’s ongoing effort to provide developer-friendly tools that reduce the time, cost, and data requirements associated with training high-performance physical AI systems. By enabling post-training integration into policy models, the technology offers a pathway to generate actions and behaviors without the need for expensive, manual policy design and extensive data collection. This potential for post-training utilization accelerates the development cycle, enabling teams to refine policies with synthetic data that remains faithful to real-world constraints and dynamics.
In empirical demonstrations, Nvidia researchers observed that incorporating Cosmos-Transfer1 into robotics simulations significantly enhances photorealism. The generated scenes gain richer details, more sophisticated shading, and natural illumination while preserving the core physical dynamics of robot movement. These improvements help close the gap between synthetic and real-world performance, reducing the risk that models trained in simulation will falter when exposed to unanticipated real-world conditions. For autonomous vehicles, the model’s capacity to preserve critical road structures while varying conditional aspects like weather and lighting helps practitioners explore edge cases and rare but consequential scenarios in a controlled, scalable manner.
Cosmos-Transfer1’s open release on a public platform aligns with Nvidia’s broader strategy to democratize access to advanced AI tools. By providing access to the model and its underlying code, Nvidia aims to empower a wider set of developers, researchers, and small teams to explore and implement sophisticated simulation capabilities without prohibitive resources. The release also complements Nvidia’s broader Cosmos ecosystem, reinforcing its mission to support physical AI developers with a suite of world foundation models designed to streamline the creation, testing, and deployment of physically grounded AI systems.
Adaptive multimodal control: Mechanisms and implications for realism
The heart of Cosmos-Transfer1 lies in its adaptive multimodal control system, which enables conditional inputs to influence generation in a spatially aware fashion. This means that the model can assign different weights to inputs such as depth maps, edge maps, and segmentation masks depending on where in the scene the input is most relevant. Such a mechanism is transformative for several reasons.
First, it enhances realism by allowing fine-grained control over where and how visual cues impact the generated world. In a robotic manipulation task, a developer may want to enforce precise representation of the objects of interest—the robot’s gripper, the target object, and immediate obstacles—while permitting more varied background textures and lighting in peripheral regions. The adaptive weighting ensures that the critical interaction zone remains highly accurate, while the surrounding environment can exhibit natural variation that enriches training data.
Second, this approach supports more robust sim-to-real transfer. By maintaining consistent representations of important scene elements across synthetic variants, policies trained in Cosmos-Transfer1 can better generalize to real-world settings where those elements remain fixed in their essential properties, even as other aspects of the scene change. For autonomous driving, this translates to preserving the geometry and semantics of the road, lane markings, and nearby vehicles, while introducing diversity in weather, time of day, and urban context. The result is a training corpus that captures the core dynamics of driving tasks while offering ample variation to prevent overfitting.
Third, the spatially adaptive scheme enables modularity in scene composition. Developers can target specific region-specific inputs to particular scene components, facilitating complex scene construction where certain elements adhere to strict specifications while others are procedurally varied. This modularity supports more efficient experimentation: teams can test how changes in one region affect policy performance without inadvertently altering other regions that must remain stable for evaluation continuity.
From a technical perspective, the adaptive multimodal control mechanism requires careful calibration of input influences across spatial coordinates. It involves designing a weighting strategy that can be learned or tuned to reflect task priorities, scene semantics, and desired variability. The researchers emphasize that the approach supports customization, making it suitable for a range of applications where different components of the scene carry different levels of importance for policy learning or evaluation. In robotics, this enables practitioners to keep critical kinematic and physical properties intact while allowing diverse contextual backdrops. In autonomous driving, it supports preserving road geometry and traffic dynamics while varying external conditions to test robustness under different operational envelopes.
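One simple way such a weighting strategy could be seeded, sketched below under stated assumptions, is to derive a spatial weight map directly from a segmentation mask: task-critical object classes receive high control weight, while background pixels are relaxed. The function and class IDs here are hypothetical illustrations, not part of any published Cosmos-Transfer1 API.

```python
import numpy as np

def weights_from_segmentation(seg_mask, critical_ids, hi=1.0, lo=0.2):
    """Derive a spatial weight map from a segmentation mask.

    Pixels belonging to task-critical classes (e.g. a gripper or target
    object) get a high control weight; everything else is relaxed so the
    generator is free to vary the background.
    """
    critical = np.isin(seg_mask, list(critical_ids))
    return np.where(critical, hi, lo)

# Toy 3x3 mask: class 1 = gripper, class 2 = target object, 0/3 = background.
seg = np.array([[0, 0, 1],
                [0, 2, 2],
                [3, 3, 0]])
w = weights_from_segmentation(seg, critical_ids={1, 2})
# w is 1.0 wherever seg is 1 or 2, and 0.2 elsewhere.
```

In a real pipeline these weights would presumably be tuned or learned per task, as the researchers suggest; the fixed high/low values here are only a starting point for illustration.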
The broader implication is a shift toward more intelligent, context-aware synthetic data generation. Rather than applying uniform perturbations or generative constraints across an entire scene, Cosmos-Transfer1 adapts the level of detail and emphasis according to the spatial map of importance. This reduces the risk of generating unrealistic edge artifacts or inconsistent physics in regions that matter most to a given task, while preserving creative diversity in less critical regions.
Bridging the gap between simulation and reality: From traditional methods to Cosmos-Transfer1
Traditional approaches to training physical AI systems have faced a persistent trade-off between realism and resource expenditure. On one hand, collecting large-scale real-world data is expensive, time-consuming, and sometimes impractical for covering the full spectrum of scenarios a system may encounter. On the other hand, reliance on simulated environments often leads to a “simulation gap”—a mismatch between synthetic data and real-world physics, textures, lighting, and variability that can degrade model performance when deployed outside the lab.
Cosmos-Transfer1 addresses this dilemma by enabling multimodal inputs to shape photorealistic simulations that retain essential real-world characteristics. The model leverages a rich array of cues—such as depth information, edge delineations, and segmentation—so developers can preserve the core structure and dynamics of a scene while injecting natural variations. This approach allows synthetic data to be both highly informative and computationally tractable, bridging the long-standing chasm between synthetic fidelity and real-world generalization.
In robotics, for instance, the ability to retain exact arm geometry and motion while varying the surrounding context creates a training environment that supports precise control and broad exploratory behavior. The model’s capacity to introduce varied lighting, shading, and background complexity without compromising the robot’s physical interactions helps ensure that learned policies are robust to a wide range of real-world conditions. For autonomous vehicles, preserving road geometry and traffic patterns while altering weather, lighting, and urban context enables the systematic exploration of edge cases that would be risky or impractical to encounter on real roads.
The adaptive multimodal control framework also offers a practical workflow for developers: by selecting relevant input modalities and assigning spatial weights, teams can tailor synthetic environments to their specific tasks. This means that data generation can be customized to the exact needs of a project, whether that entails focusing on precise manipulation tasks in robotics or evaluating the resilience of driving policies under adverse weather conditions. The result is a more efficient iteration loop, with faster experimentation cycles and clearer evaluation signals, ultimately accelerating the roadmap for physical AI deployments.
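The workflow described above, selecting modalities and assigning spatial weights per task, might be captured in a small configuration like the following. Every field name is an invented illustration of how a team could organize such choices; it does not reflect Cosmos-Transfer1's actual configuration format.

```python
# Hypothetical per-task control configurations (illustrative only).
robot_manipulation = {
    "modalities": ["depth", "segmentation", "edge"],
    "regions": {
        # Tight control where precision matters most.
        "gripper_and_object": {"depth": 1.0, "segmentation": 0.8, "edge": 0.6},
        # Loose control so backgrounds can vary freely.
        "background":         {"depth": 0.1, "segmentation": 0.2, "edge": 0.4},
    },
}

driving_weather_sweep = {
    "modalities": ["segmentation", "edge"],
    "regions": {
        "road_and_lanes":  {"segmentation": 1.0, "edge": 0.9},
        "sky_and_scenery": {"segmentation": 0.3, "edge": 0.2},
    },
    # Attributes deliberately left free to vary across generated samples.
    "varied": ["weather", "lighting", "time_of_day"],
}
```

The design intent such a configuration expresses matches the article's workflow: strict fidelity where the task demands it, deliberate variety everywhere else.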
Cosmos-Transfer1’s methodology aligns with broader trends in AI where foundation models are specialized for real-world grounding. By focusing on world-level generation that respects physical constraints and scene semantics, Nvidia’s approach complements existing sim-to-real strategies such as domain randomization, cycle-consistent generation, and physically informed rendering. The combination offers a versatile toolkit for researchers and engineers, enabling them to craft training environments that balance realism, variety, and computational efficiency.
Applications in robotics and autonomous driving: Practical impact and policy implications
Cosmos-Transfer1’s capabilities translate into tangible opportunities across two of the most dynamic domains in AI: robotics and autonomous driving. In robotics, the model’s adaptive multimodal control facilitates the creation of highly realistic training scenes that preserve the essential dynamics of robot motion and interaction with objects. A policy model designed to govern a robotic arm can benefit from synthetic data that reflects precise object geometry, contact dynamics, and motion trajectories while still presenting diverse backgrounds and lighting conditions. Such richness enables the policy to generalize more effectively when transitioning from simulated tasks to real-world manipulation challenges, including grasping, assembly, and delicate handling tasks in variable environments.
From a policy development standpoint, Cosmos-Transfer1 offers the potential for post-training integration. A policy model can be trained using synthetic data that has been augmented with Cosmos-Transfer1 and subsequently refined or adapted with real-world fine-tuning. This can reduce the cost, time, and data requirements associated with manual policy training, enabling faster deployment cycles and more frequent policy updates as new hardware configurations or tasks emerge.
In autonomous driving, the model supports the creation of varied yet realistic driving scenarios that preserve critical road structures and traffic semantics. Practitioners can generate training sets that reflect a broad spectrum of weather conditions, lighting scenarios, urban layouts, and road geometries. Importantly, the generated scenes maintain the essential spatial relationships and dynamics of traffic participants, which helps ensure that driving policies learn robust reaction strategies for rare but high-stakes events. The model’s ability to tailor visual inputs across spatial regions means developers can stress-test specific components, such as perception in challenging lighting or complex intersections, without sacrificing the integrity of the core driving scenario.
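The scenario-generation pattern described here, holding road structure fixed while sweeping external conditions, can be sketched as a simple parameter grid. The condition names below are illustrative placeholders, not values drawn from Nvidia's tooling.

```python
from itertools import product

# Hold the road layout fixed while sweeping external conditions.
layouts = ["four_way_intersection"]           # structure kept constant
weather = ["clear", "rain", "fog", "snow"]    # varied condition 1
lighting = ["day", "dusk", "night"]           # varied condition 2

scenarios = [
    {"layout": lay, "weather": wth, "lighting": lgt}
    for lay, wth, lgt in product(layouts, weather, lighting)
]
# 1 layout x 4 weather x 3 lighting = 12 scenario variants, each
# preserving the same spatial road semantics.
```

Even this trivial grid illustrates why preserved structure matters: all twelve variants exercise the same intersection geometry, so a perception or planning policy is stress-tested on conditions, not on a shifting road layout.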
Cosmos-Transfer1 has already demonstrated value in robotics simulation testing by enhancing photorealism and maintaining physical dynamics. The improved realism helps generate more informative training signals, enabling better policy learning and more reliable performance estimates. In autonomous vehicle development, the capacity to maximize the utility of real-world edge cases—scenarios that are rare but critical—reduces the likelihood that a system will fail in production due to unseen corner cases. By enabling a broader and more controlled exploration of such scenarios in simulation, developers can validate and strengthen safety margins before real-world deployment.
The broader implication for industry is a potential acceleration of robotics and autonomous system development. As companies across manufacturing, logistics, transportation, and consumer electronics invest in more capable autonomous systems, high-fidelity synthetic data becomes an essential ingredient in building robust, scalable AI solutions. Cosmos-Transfer1 contributes to this trajectory by offering a powerful, customizable toolset for producing realistic training environments that align with real-world physics and dynamics, while also enabling rapid experimentation and iteration.
Nvidia Cosmos: The broader ecosystem and world foundation models for physical AI
Cosmos-Transfer1 is a component of Nvidia’s broader Cosmos platform, a suite of world foundation models (WFMs) crafted specifically to support physical AI development. The platform includes Cosmos-Predict1 for general-purpose world generation and Cosmos-Reason1 for physical common-sense reasoning. Nvidia characterizes Cosmos as a developer-first platform designed to help physical AI developers build their systems more efficiently, benefiting from a suite of pre-trained models and accompanying training scripts.
The Cosmos platform is offered under a combination of licensing terms designed to balance openness with the protection of intellectual property. Pre-trained models are available under the Nvidia Open Model License, with training scripts released under the Apache 2.0 license. This licensing approach aims to foster a collaborative ecosystem that accelerates innovation while maintaining clear usage terms for both researchers and organizations integrating the models into commercial pipelines.
From a strategic perspective, the Cosmos ecosystem positions Nvidia to capitalize on the expanding market for AI tooling designed to accelerate autonomous system development. Industries ranging from manufacturing to transportation are increasingly investing in robotics and autonomous technologies, and Cosmos provides a coherent stack of models and tools to accelerate development cycles, reduce duplication of effort, and standardize best practices for physical AI. The ecosystem’s emphasis on world-level models—designed to reason about and generate environment-scale representations—addresses a key facet of physical AI that complements foundational perception and control models used in robotics and autonomous driving.
For developers, Cosmos offers a centralized way to access a growing library of capabilities for world generation, reasoning about physical environments, and predicting scene dynamics. The platform’s open-release strategy further broadens access, enabling smaller teams and independent researchers to experiment with state-of-the-art simulation technologies and contribute to the evolving landscape of physical AI development. This democratization aligns with Nvidia’s broader community-building goals, creating fertile ground for collaboration, innovation, and accelerated progress in simulation-based training and evaluation.
Real-time generation and hardware scaling: Performance milestones
A notable highlight of Cosmos-Transfer1 is its demonstrated performance in real time when run on cutting-edge Nvidia hardware. Nvidia researchers showcased Cosmos-Transfer1 operating in real time on their latest hardware configurations, including a GB200 NVL72 rack. An inference scaling strategy was developed to achieve real-time world generation, delivering a dramatic improvement in throughput as compute resources scale up.
The performance gains observed in the experiments were significant. Scaling from a single GPU to a 64-GPU setup yielded roughly 40 times faster processing, enough to generate five seconds of high-quality video in about 4.2 seconds, which is effectively real-time throughput at scale. The capability to generate long, coherent, and photorealistic sequences at real-time speeds addresses a critical industry bottleneck: simulation speed. Real-time generation supports rapid iteration, letting developers test, refine, and validate autonomous systems at a pace closer to real-world development timelines. Faster simulation means more data, more diverse scenarios, and more robust evaluation, all of which shorten product development cycles and speed deployment.
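Taking the reported figures at face value, a quick back-of-the-envelope calculation shows what they imply about scaling efficiency and real-time headroom:

```python
# Reported figures: a 64-GPU setup runs ~40x faster than a single GPU,
# generating 5 seconds of video in ~4.2 seconds of wall-clock time.
gpus = 64
speedup = 40.0
video_seconds = 5.0
wall_seconds = 4.2

scaling_efficiency = speedup / gpus              # fraction of ideal linear scaling
throughput_ratio = video_seconds / wall_seconds  # >1 means faster than real time

print(f"{scaling_efficiency:.1%} scaling efficiency")   # 62.5%
print(f"{throughput_ratio:.2f}x real-time throughput")  # 1.19x
```

In other words, the 40x figure corresponds to about 62.5% of perfect linear scaling across 64 GPUs, and the system generates video roughly 1.19 times faster than playback speed, a thin but genuine real-time margin.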
The hardware-driven speedups also have implications for research and education. As more researchers gain access to powerful generation capabilities, they can experiment with larger-scale world models, more intricate scene compositions, and longer sequences without prohibitive time costs. This can accelerate the exploration of new architectures, training paradigms, and evaluation methodologies, ultimately easing bottlenecks that have historically limited progress in physical AI.
Real-time generation at scale also highlights memory and bandwidth considerations. Handling high-fidelity rendering, long sequences, and multimodal inputs requires careful data management, efficient model architectures, and optimized pipelines. Nvidia’s demonstration underscores the importance of hardware-aware design and software optimization in achieving practical, real-world capabilities for synthetic data generation. The practical takeaway for developers is clear: to leverage Cosmos-Transfer1 effectively, projects should plan for access to scalable compute resources and optimized data pipelines that can keep pace with the demands of real-time world generation.
Open-source release and industry impact: Democratizing advanced AI for developers
Nvidia’s decision to publish Cosmos-Transfer1 and its underlying code on a public platform embodies a broader strategy to democratize access to advanced AI tools. By releasing both the model and its code, Nvidia lowers barriers to entry for developers around the world, enabling smaller teams, startups, and independent researchers to experiment with state-of-the-art simulation technologies that were previously accessible mainly to larger organizations with substantial computational resources.
This open-release approach aligns with Nvidia’s broader aim of expanding developer communities around its hardware and software offerings. By making advanced tools more widely available, the company broadens its ecosystem, potentially accelerating progress in physical AI development as more participants contribute ideas, optimizations, and complementary solutions. The open-source model enables broader collaboration, peer review, and iteration, with the potential to drive rapid improvements and new use cases across industries.
For robotics and autonomous driving engineers, access to Cosmos-Transfer1 can shorten development cycles by providing more efficient, high-fidelity training environments. Teams can create diverse training scenarios with greater efficiency, test policies under a wider array of conditions, and validate system performance in synthetic settings before proceeding to real-world testing. The practical impact of open access includes faster prototyping, more thorough testing, and reduced exploration costs, enabling organizations to push product development timelines forward while maintaining rigorous performance standards.
However, while open-source tools democratize access, they do not eliminate the need for expertise or substantial computational resources. Effective use of Cosmos-Transfer1 requires a solid grounding in simulation, rendering, and AI training workflows, as well as access to sufficient computing infrastructure to scale generation tasks. The broader takeaway is that open release expands the community and capabilities, but success still hinges on technical skill, infrastructural capacity, and disciplined integration into development pipelines.
The broader industry impact of this release is likely to be multi-faceted. Companies across manufacturing, logistics, and transportation are actively pursuing robotics and autonomous technologies, and Cosmos-Transfer1 offers a robust, scalable means to accelerate development. By providing a practical, high-fidelity toolset for synthetic data generation, Nvidia enables a broader set of players to push the boundaries of what is possible in physical AI. The potential benefits include shorter development cycles, improved model robustness, and more efficient exploration of edge cases and rare scenarios that are critical to safe, reliable deployment.
Beyond the technical merits, the open release may encourage a more collaborative environment where researchers share best practices for leveraging multimodal inputs, spatial weighting strategies, and scene generation workflows. The resulting knowledge exchange can spur new methodologies, optimization techniques, and evaluation metrics tailored to multimodal world generation, further advancing the field of physical AI.
Real-world implications: Market context, adoption, and strategic considerations
The introduction of Cosmos-Transfer1 arrives at a moment when industries are increasingly investing in robotics and autonomous systems. Manufacturing, logistics, transportation, and other sectors are prioritizing automation and intelligent systems to improve efficiency, safety, and reliability. Tools that streamline the development, testing, and deployment of physical AI have substantial market appeal because they can shorten time-to-market, reduce costs, and improve system resilience across complex operational environments.
Nvidia’s Cosmos platform, with Cosmos-Transfer1 as a key component, positions the company to capitalize on this growing demand for end-to-end AI-enabled simulation and deployment workflows. The combination of high-fidelity world generation, adaptive multimodal control, and real-time performance on scalable hardware makes it possible for developers to create, test, and refine physical AI solutions within a cohesive ecosystem. The platform’s emphasis on world foundation models aligns with industry trends toward modular, reusable AI assets that can be composed to address diverse tasks—ranging from object manipulation in factory settings to complex perception and control challenges in autonomous driving.
From a strategic standpoint, the release reinforces Nvidia’s role as a leading enabler of AI-enabled automation. By providing the tools that streamline simulation-based training and evaluation, Nvidia helps lower the barriers to entry for new players seeking to adopt physical AI technologies. This capability can spur more startups and research teams to embark on robotics and autonomous driving projects, contributing to a more dynamic and competitive landscape. It also encourages established industry players to integrate advanced synthetic data generation into their pipelines to improve model robustness and safety.
The broader market implications extend to education and research as well. Universities and research institutes can access sophisticated simulation capabilities to teach, explore, and validate physical AI concepts without prohibitive costs or specialized infrastructure. The open-release approach facilitates hands-on experimentation, potentially accelerating the pace of discovery and innovation in the field.
Open-source availability also encourages cross-industry collaboration, enabling practitioners to share datasets, scenarios, and evaluation methodologies built with Cosmos-Transfer1. As the community grows, best practices for multimodal world generation, evaluation metrics for sim-to-real transfer, and standardized benchmarks may emerge, further accelerating progress and establishing common standards for physical AI development.
Technical and ethical considerations: Safety, bias, and responsible deployment
As with any powerful AI tool, the deployment of Cosmos-Transfer1 requires careful consideration of safety, reliability, and ethical implications. The realism and controllability of synthetic environments raise questions about how synthetic data influences policy learning, policy alignment with real-world safety requirements, and the potential for unintended biases to propagate through training pipelines.
One area of focus is ensuring that generated scenes remain faithful to physical laws and task-specific constraints. The adaptive multimodal control system must be calibrated to preserve critical dynamics in ways that reflect safe and reliable operation. In robotics, this includes maintaining accurate kinematics, collision physics, and contact dynamics, even as the environment varies. In autonomous driving, it involves preserving road geometry, traffic semantics, and vehicle interaction dynamics while varying environmental conditions in ways that do not misrepresent safety-critical cues.
Another consideration is the ethical use of synthetic data for policy learning. While synthetic environments can improve safety and reduce real-world risk, there is a risk that overreliance on synthetic data could mask real-world edge cases that only appear under rare conditions. Therefore, practitioners should balance synthetic data with controlled real-world testing, ensuring validation in realistic settings remains a core component of deployment strategies. Transparent documentation of data generation processes, scenario selection criteria, and evaluation metrics can help maintain accountability and reproducibility across teams.
Computational resource requirements are also a practical concern. The demonstrated real-time performance at scale implies access to substantial hardware resources, which may be beyond the reach of smaller teams or institutions. While open-source access lowers some barriers, effective use still presumes availability of adequate compute and storage, as well as expertise in optimization and distributed training or inference pipelines. This highlights a broader theme in AI tool adoption: access to cutting-edge capabilities often goes hand in hand with the need for organizational capacity to deploy, manage, and maintain sophisticated systems.
From a governance perspective, organizations using Cosmos-Transfer1 should consider compliance with licensing terms, data privacy considerations if synthetic pipelines are integrated with real-world datasets, and policies governing the deployment of autonomous systems trained with synthetic data. Thoughtful risk assessment and governance frameworks can help ensure responsible use and alignment with safety, privacy, and societal impact objectives.
Future outlook: Adoption, integration, and ongoing innovation
Looking ahead, Cosmos-Transfer1 signals a trajectory toward increasingly capable and accessible simulation-based training for physical AI. As developers gain hands-on experience with adaptive multimodal world generation, techniques for weighting inputs across spatial regions are likely to mature, enabling even more precise and expressive scene control. We can anticipate refinements in how segmentation, depth, edge information, and potentially other modalities are integrated to produce scenes that balance realism, diversity, and computational efficiency.
The integration of Cosmos-Transfer1 with broader development pipelines is also expected to deepen. Teams may adopt end-to-end workflows in which synthetic data generated through Cosmos-Transfer1 feeds into policy learning, evaluation, and deployment stages, coupled with real-world data collection and fine-tuning. Such integrated pipelines can shorten iteration cycles, improve the fidelity of simulations, and enhance the overall robustness of robotic and autonomous systems.
Continued open-source collaboration is likely to yield community-driven improvements, including new scoring metrics for sim-to-real transfer, additional scene templates, and optimized inference strategies for diverse hardware configurations. The ecosystem around Cosmos could expand to include more partner models, extended licensing options, and a growing library of world templates that cover a broad spectrum of industrial and consumer applications.
As with any rapidly evolving technology, ongoing research will undoubtedly explore potential limitations and improvements. Areas of future work may include further enhancing the realism of dynamic lighting and atmospheric effects, enabling even more nuanced weather patterns, and refining the preservation of delicate physical cues that govern interaction dynamics in complex scenes. Researchers will also explore ways to quantify the benefits of adaptive multimodal control in terms of policy performance gains, data efficiency, and training stability, providing clearer guidelines for practitioners on how to calibrate inputs for different tasks.
The commercialization and widespread adoption of Cosmos-Transfer1 and related Cosmos tools will depend on continued performance demonstrations, ease of integration, and the perceived value of synthetic data in real-world deployment. As industries embrace digital transformation and pursue smarter, safer, and more autonomous systems, Cosmos-Transfer1 stands as a foundational technology that can accelerate progress, reduce risk, and enable more ambitious experimentation in physical AI.
Conclusion
Cosmos-Transfer1 represents a landmark development in the domain of physical AI, offering a highly controllable, multimodal framework for generating photorealistic world simulations. Its adaptive weighting of inputs across spatial locations enables nuanced scene generation that preserves critical dynamics while introducing natural variability, supporting more robust sim-to-real transfer for robotics and autonomous driving. By integrating this model within Nvidia’s Cosmos ecosystem and releasing it as open-source, Nvidia is expanding access to advanced simulation capabilities and fostering a global community of developers and researchers dedicated to advancing physical AI.
The model’s real-time performance on scalable hardware demonstrates that high-fidelity world generation can keep pace with the fast-moving demands of modern AI development. This capability underpins faster iteration cycles, more comprehensive testing, and a clearer path from synthetic training to real-world deployment. The broader implications for industry include accelerated innovation across robotics, transportation, and manufacturing, as well as a shift toward more efficient, data-rich training pipelines that can adapt to emerging tasks and environments.
As with all powerful tools, responsible use will require careful attention to safety, governance, and ethical considerations. While Cosmos-Transfer1 lowers barriers to building sophisticated simulations, practitioners must balance synthetic data with real-world validation, maintain transparent documentation, and implement rigorous evaluation standards to ensure that policy learning translates into safe, reliable, and beneficial physical AI systems. With continued research, community collaboration, and hardware-enabled scaling, Cosmos-Transfer1 and the Cosmos platform are well-positioned to shape the next generation of intelligent, autonomous machines.