Cortex AI Compute


Cortex is the AI training program at Giga Austin and the substrate that makes the rest of the campus's autonomy and humanoid claims tractable. The factory builds the vehicles and humanoids; Cortex trains the neural networks that operate them.

As of Q1 2026, Tesla's Form 8-K confirms two operational AI training clusters on the campus, each with a distinct workload assignment.

Cluster | Location on Campus | GPU Footprint | Primary Workload | Status
Cortex 1 | South end | >100,000 H100e | FSD & autonomous vehicles | Production
Cortex 2 | North end | >130,000 H100e | Optimus humanoid training | Early Ramp

Cortex 1 came online during Q4 2024 with an initial 50,000-unit Nvidia H100 deployment, scaling to its current 100,000-class footprint through 2025 across the H100 and H200 generations. Cortex 2 was redesignated from a previously planned campus support facility in mid-2025; its first 250 MW phase activated in April 2026, with the full 500 MW capacity targeting mid-2026.

The two-cluster split is the architectural signal that matters strategically: Tesla is treating FSD and Optimus as distinct training workloads at sufficient scale to justify dedicated clusters for each.


Cortex 1: Dedicated to FSD and Autonomous Vehicles

Cortex 1 occupies a dedicated data center building on the south end of the Giga Texas property, adjacent to the main 10-million-square-foot vehicle factory. The cluster is the original AI training campus Tesla built at the site.

Its primary workload is Tesla's Full Self-Driving neural network — the model that runs on every Tesla vehicle hardware platform from HW3 through AI5 and that powers the Robotaxi service in Austin, Dallas, Houston, and the broader expansion footprint.

The cluster came online during Q4 2024 with 50,000 Nvidia H100 GPUs as its initial deployment, scaling through 2025 to a combined H100 and H200 footprint of approximately 100,000 GPUs. Tesla's Q1 2026 8-K filing reports the cluster as in production at "over 100k H100e" — H100-equivalent — reflecting the mixed-generation GPU population.

Cortex 1's training output enabled the architectural transition from FSD v11 to FSD v12 and powers the ongoing v14 release cycle. Tesla's official software release notes attribute a 5x scaling of training compute to Cortex coming online, the step-function improvement that made the v12 rewrite tractable.

FSD Version | Architecture
v11 | Modular C++-based code
v12 | End-to-end neural network (rewrite enabled by Cortex 1 compute)
v14 | Current production release; ongoing iteration on Cortex 1

The cluster's specialization on vehicle autonomy is operationally significant. The workload is dominated by dashcam video processing from the global Tesla fleet, edge-case mining from real-world driving telemetry, and high-cadence training cycles tied to the v14 and v15 release schedules.

The facility's power footprint at full capacity is approximately 130 MW. The cooling architecture is itself a documented engineering feat — six chilled-water pipes feeding the GPU racks, four water tanks on the second floor of the building, and aircraft-propeller-class fans on the roof producing the visible heat exhaust that drone observers regularly capture.


Cortex 2: Dedicated to Optimus Humanoid Training

Cortex 2 sits on the north end of the Giga Texas campus, opposite Cortex 1, and is the larger of the two clusters by every measure.

The site was originally permitted as a "Central Campus Support Facility" — three smaller buildings plus water storage — before Tesla redesignated it in mid-2025 as a single large-structure data center for the second-generation Cortex cluster.

Permits, foundation work, water service, and shell construction proceeded through late 2025 and the first quarter of 2026. The first 250 MW phase activated in April 2026, with Tesla's Q1 2026 8-K classifying the cluster as "Early Ramp" at over 130,000 H100-equivalent GPUs. The full 500 MW capacity targets mid-2026.

The cluster's primary workload is the Optimus humanoid robot training pipeline — fundamentally a different compute profile from FSD training.

Workload Type | Cortex 1 (FSD) | Cortex 2 (Optimus)
Primary training data | Dashcam video at fleet scale | First-person video from human task demonstrations
Network architecture | Single end-to-end driving network | Mixed perception, manipulation, and planning networks
Simulation load | Modest synthetic augmentation | Heavy (neural world simulator rollouts)
Training loop type | Supervised + imitation learning | Supervised + imitation + reinforcement learning

The neural world simulator is the workload that most distinguishes Cortex 2 from Cortex 1. Tesla AI VP Ashok Elluswamy disclosed it at ICCV in November 2025. The simulator is itself trained on the same fleet video data and synthesizes high-fidelity training environments — a learned simulation rather than a hand-coded physics engine, generating thousands of synthetic training variations per day from real-world data feeds.

This workload profile is what justifies a dedicated cluster. Humanoid training requires simulation-heavy compute that vehicle training does not, and running the two workloads on shared hardware would force compromise scheduling that neither program can afford given Tesla's stated 2026-2027 timelines for both Optimus volume production and FSD v15 release.

Cortex 2 is the first Tesla compute facility designed from the start for the humanoid training profile at scale.


Two Closed Loops, One Campus

The structural significance of dedicating Cortex 1 to FSD and Cortex 2 to Optimus is that each cluster anchors its own closed-loop training architecture, with both loops running concurrently on the same campus.

The two loops share underlying fleet-data ingestion and over-the-air distribution infrastructure but produce model weights for different programs.

The FSD loop, anchored at Cortex 1, has four stages running continuously:

Stage | FSD Loop (Cortex 1)
1. Data ingestion | Global Tesla fleet (~4M vehicles in early 2026) uploads selected video clips and telemetry via OTA channel
2. Training | Cortex 1 ingests fleet video, augments with synthetic data, runs neural-network training cycles (~70K GPU hours per FSD cycle, compressing weeks to hours)
3. Distribution | Completed model weights pushed back to fleet over the air
4. Inference and feedback | Vehicles run new models in the real world; edge cases the previous model handled poorly feed back to Cortex 1
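The cycle-time claim in stage 2 is straightforward arithmetic: a ~70,000 GPU-hour training cycle spread across a 100,000-class cluster finishes in under an hour of wall-clock time under ideal parallel scaling. A minimal sketch (perfect and 70% parallel efficiency are assumptions for illustration, not disclosed figures):

```python
# Idealized wall-clock time for a distributed training cycle.
# Figures from the text: ~70,000 GPU hours per FSD cycle on a
# cluster of roughly 100,000 GPUs. Scaling efficiency is assumed.

def wall_clock_hours(gpu_hours: float, num_gpus: int, efficiency: float = 1.0) -> float:
    """Wall-clock duration of a job needing `gpu_hours` of compute
    on `num_gpus` devices at the given parallel efficiency (0-1)."""
    return gpu_hours / (num_gpus * efficiency)

# Perfect scaling: 70,000 / 100,000 = 0.7 h, i.e. about 42 minutes.
ideal = wall_clock_hours(70_000, 100_000)

# An assumed 70% parallel efficiency stretches this to one hour.
realistic = wall_clock_hours(70_000, 100_000, efficiency=0.7)

print(f"{ideal:.2f} h ideal, {realistic:.2f} h at 70% efficiency")
```

Either way, the cluster compresses what a few thousand GPUs would grind through in weeks into a sub-day cycle, which is what makes the high-cadence v14/v15 release schedule feasible.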

The Optimus loop, anchored at Cortex 2, mirrors the same four-stage architecture with different inputs and endpoints:

Stage | Optimus Loop (Cortex 2)
1. Data ingestion | First-person video from human task demonstrations (data-collection operators with camera rigs), repurposed fleet vehicle camera footage, operational telemetry from deployed Optimus units
2. Training | Cortex 2 training pipeline + neural world simulator workload synthesizing additional training scenarios
3. Distribution | Updated Optimus model weights pushed via OTA to deployed humanoid fleet
4. Inference and feedback | Optimus units in Fremont and (beginning summer 2026) Giga Austin generate sensor data, manipulation telemetry, edge-case footage feeding back to Cortex 2

The two loops are not isolated. Fleet video collected for FSD purposes also informs Optimus spatial understanding, and the world simulator trained at Cortex 2 generates synthetic environments useful to FSD as well. But the primary training compute and model-weight production for each program is anchored at its dedicated cluster.
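The shared four-stage shape of the two loops can be sketched as one parameterized pipeline. All identifiers here are illustrative, not Tesla internals; only the stage structure and data sources come from the tables above.

```python
# Illustrative sketch of the shared four-stage closed-loop architecture.
# Stage names mirror the tables above; nothing here reflects actual
# Tesla infrastructure, APIs, or code.
from dataclasses import dataclass, field

@dataclass
class TrainingLoop:
    cluster: str                 # "Cortex 1" or "Cortex 2"
    data_sources: list[str]      # stage 1: ingestion inputs
    uses_world_simulator: bool   # Cortex 2's distinguishing workload
    history: list[str] = field(default_factory=list)

    def run_cycle(self, model_version: str) -> str:
        # Stage 1: ingest data from this loop's sources.
        self.history.append(f"ingest: {', '.join(self.data_sources)}")
        # Stage 2: train on the dedicated cluster; the Optimus loop
        # adds neural world-simulator rollouts.
        stage2 = f"train on {self.cluster}"
        if self.uses_world_simulator:
            stage2 += " + world-simulator rollouts"
        self.history.append(stage2)
        # Stage 3: distribute new weights over the air.
        self.history.append(f"OTA push: {model_version}")
        # Stage 4: deployed units generate feedback for the next cycle.
        self.history.append("collect edge cases -> next cycle")
        return model_version

fsd = TrainingLoop("Cortex 1", ["fleet dashcam video", "telemetry"], False)
optimus = TrainingLoop("Cortex 2", ["first-person demos", "robot telemetry"], True)
fsd.run_cycle("FSD v14.x")
optimus.run_cycle("Optimus model update")
```

The design point the sketch captures is that the loops differ only in their inputs, their cluster, and the simulator workload; the four-stage skeleton is identical.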

Tesla's stated framing is that this dual-loop architecture is the company's primary competitive moat in embodied AI. No competitor operates a vehicle fleet at Tesla's scale. No competitor has a humanoid deployment population growing at the rate Tesla is targeting. No competitor has the on-site training compute to close both loops at the same campus.

The campus siting is what makes both loops tight: the manufacturing, the cars and humanoids rolling off the line, and the dedicated compute that trains them all sit on the same property and share the same fiber, power, and operations team.


The Dojo Wind-Down and the All-In Cortex Commitment

Cortex's role at Giga Austin became unambiguous in August 2025, when Tesla officially disbanded the Dojo project, the company's four-year effort to build a vertically integrated AI training supercomputer using custom D1 silicon.

Musk publicly described Dojo as an "evolutionary dead end." The cited reasons were the pace of in-house chip development against Nvidia's release cadence, performance-per-watt gaps as Nvidia's H100 and H200 platforms matured alongside established InfiniBand and Ethernet fabrics, and the opportunity cost of building custom training silicon while vehicle programs, Optimus development, and energy products required engineering attention.

The strategic decision was to redirect resources into Cortex's GPU-based clusters and to focus Tesla's custom silicon work on inference rather than training.

Tesla's AI5 chip, taped out at Samsung in early 2026 with first samples produced in April 2026, is the practical expression of that pivot. AI5 is designed for both vehicles and Optimus, optimized for on-device inference workloads, and produced at fab partners (Samsung Taylor and ultimately the Terafab pilot fab on the same Giga Austin North Campus) rather than at a custom Tesla wafer line.

The end result is a cleaner architectural division.

Compute Tier | Silicon | Workload
Training (FSD) | Nvidia H100/H200 at Cortex 1 | Vehicle autonomy model training
Training (Optimus) | Nvidia H100/H200 at Cortex 2 | Humanoid model training
Inference (vehicles) | Tesla AI5 (Samsung Taylor + TSMC Arizona) | On-board FSD inference
Inference (Optimus) | Tesla AI5 → AI6 (Terafab pipeline) | On-robot humanoid inference

The Dojo wind-down also clarifies what Cortex 2 is and is not. It is not the spiritual successor to Dojo (that role would have implied custom silicon at the training tier). It is a parallel GPU-based cluster purpose-built for a workload — humanoid training — that Cortex 1 was not designed to handle at the scale Tesla now requires.

Detailed coverage of Tesla's data center architecture, deployment history, and the Dojo-to-Cortex transition is available at DatacentersX: Tesla Dojo to Cortex Deployment.


Power, Water, and Substrate Dependency

The Cortex clusters are the campus's most exposed dependency on the Texas Energy Nexus.

At full Cortex 2 activation, the combined power draw of Cortex 1 and Cortex 2 reaches approximately 630 MW — making AI training the single largest electrical load on the campus, exceeding even the vehicle factory at full production.

Facility | Power Draw
Cortex 1 | ~130 MW
Cortex 2 (full design capacity) | 500 MW
Combined Cortex load at Giga Austin | ~630 MW
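The combined figure is simple addition, but it puts the behind-the-meter assets in proportion: even at nameplate output, the 30 MW rooftop solar array covers only a few percent of the Cortex load (and the real solar capacity factor is far lower, so this is an upper bound):

```python
# Combined Cortex electrical load per the table above, and the share
# the campus's 30 MW rooftop solar array could cover at nameplate
# output. Nameplate coverage is an upper bound, not a real-world figure.
CORTEX_1_MW = 130   # approximate, full capacity
CORTEX_2_MW = 500   # full design capacity
SOLAR_MW = 30       # rooftop array nameplate

combined_mw = CORTEX_1_MW + CORTEX_2_MW
solar_fraction = SOLAR_MW / combined_mw

print(f"combined Cortex load: {combined_mw} MW")
print(f"rooftop solar at nameplate: {solar_fraction:.1%} of Cortex load")
```

The arithmetic is why Megapack storage and ERCOT integration, rather than on-site generation alone, carry the substrate.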

This load profile is what justifies the campus's behind-the-meter energy investments: the 30 MW rooftop solar array, on-site Megapack battery storage, and the broader integration with ERCOT that the Texas Energy Nexus pillar documents.

Cortex's water draw is similarly material. The chilled-water cooling architecture serving Cortex 1 alone requires the four water tanks on the building's second floor plus continuous makeup supply. Cortex 2's full 500 MW footprint scales the cooling load proportionally.

The campus's water authority coordination — a Travis County and Williamson County boundary issue documented in the Texas Triangle pillar — is partly driven by Cortex's demand signal.

The dependency is bidirectional. Cortex's load justifies the substrate buildout that the energy and water nexuses provide, and the substrate buildout is what makes Cortex's siting at Giga Austin viable rather than at a remote-site data center where the closed-loop architecture would not work.


Compute Capacity Snapshot (Q1 2026)

Cluster | Primary Workload | Location on Campus | GPU Footprint | Power | Status
Cortex 1 | FSD & autonomous vehicle training | South end of main factory complex | >100,000 H100e (mixed H100 / H200) | ~130 MW | Production (Q4 2024 initial deployment)
Cortex 2 | Optimus humanoid training | North end of campus, opposite Cortex 1 | >130,000 H100e (Phase 1 activated April 2026) | 250 MW activated, 500 MW design capacity | Early Ramp (full capacity targeting mid-2026)

Capacity figures are from Tesla's Q1 2026 Form 8-K filing. H100e refers to H100-equivalent GPU count, reflecting the mixed-generation H100 and H200 population at both facilities. Workload assignments reflect the primary training mandate for each cluster; Tesla retains operational flexibility to schedule workloads across clusters as program needs require.
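The H100e convention normalizes a mixed GPU population to H100 equivalents by weighting each generation's units by relative training throughput. The weights and the fleet mix below are assumptions for illustration; Tesla has not published its conversion factors:

```python
# Sketch of an H100-equivalent (H100e) calculation for a mixed fleet.
# The per-generation weights are ASSUMED for illustration; real weights
# depend on workload and are not disclosed in Tesla's filings.
ASSUMED_H100E_WEIGHTS = {
    "H100": 1.0,   # baseline by definition
    "H200": 1.4,   # hypothetical uplift from larger, faster HBM
}

def h100_equivalents(fleet: dict[str, int]) -> float:
    """Weighted GPU count: sum over generations of units x weight."""
    return sum(ASSUMED_H100E_WEIGHTS[gen] * n for gen, n in fleet.items())

# Hypothetical mix: 70k H100 + 25k H200 -> 70k + 35k = 105k H100e,
# one way a mixed population could back an "over 100k H100e" figure.
example = h100_equivalents({"H100": 70_000, "H200": 25_000})
print(f"{example:,.0f} H100e")
```

The point of the convention is that headline counts stay comparable across filings even as the underlying hardware mix shifts between generations.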


Outlook

The Cortex program's 2026 trajectory is defined by Cortex 2's full ramp and the operational implications of running over 230,000 H100-equivalent GPUs concurrently on one campus across two dedicated workloads.

The mid-2026 milestone — Cortex 2 reaching its full 500 MW activation — is the inflection point where Tesla's combined training compute moves into the small group of facilities globally with that scale of dedicated AI training capacity.

The 2026-2027 questions the program will resolve are operational:

Whether the FSD loop on Cortex 1 translates into FSD v15 release timing as Tesla has projected.
Whether the Optimus loop on Cortex 2 meaningfully accelerates humanoid capability development at the cadence the 10-million-unit Austin factory ramp requires.
Whether the 500 MW power profile is sustainable on ERCOT through Texas summer load peaks.
Whether the cooling architecture scales without the bottlenecks that have constrained other hyperscale AI deployments.

The dual-cluster, dual-loop architecture's broader campus role is also worth noting. Cortex is the program with the most direct dependency on the Texas Energy Nexus and the most direct feedback loops to vehicle and humanoid production. It is the substrate that makes the four-program campus integration tractable.

If either loop falters, the recursive feedback that defines the EV and Optimus production pages slows on its respective side. If both compound at the cadence Tesla projects, the campus's autonomy and humanoid trajectories accelerate together at a pace no other operator can currently match.


Related Coverage

Texas Industrial Triad | Giga Austin Nexus (pillar) | Electric Vehicle Production | Optimus Humanoid Production | Terafab Pilot | DatacentersX: Tesla Dojo to Cortex Deployment | Texas Energy Nexus | UT Austin Nexus | Austin Strategic Capital Nexus | Austin Semiconductor Ecosystem