Takashi Tanaka's AI Algorithm Breakthrough at Preferred Networks

Takashi Tanaka, Preferred Networks, and the Next Leap in AI Hardware Acceleration

Introduction

In the rapidly evolving field of artificial intelligence, breakthroughs in algorithmic efficiency and hardware integration are defining the competitive frontier. Nowhere is this more evident than in Japan, where Preferred Networks (PFN)—a Tokyo-based AI powerhouse—has become synonymous with the optimization of deep learning for challenging real-world applications. At the heart of PFN’s technological leadership is Takashi Tanaka, the company’s CTO, who has recently spearheaded a breakthrough in AI algorithms designed specifically to optimize deep learning models for hardware acceleration. This advance is particularly transformative for sectors deeply reliant on real-time AI computations, such as autonomous vehicles and robotics, where every millisecond and milliwatt matters.

This report provides a comprehensive exploration of Takashi Tanaka’s role, the organizational context and strategy of Preferred Networks, the technical details and architecture of their new AI breakthrough, and the broad implications for the future of industrial AI. It also offers detailed insights into applications ranging from logistics to healthcare, evaluates PFN’s place within the global competitive landscape, and investigates collaborative and strategic trends expected to influence the next generation of intelligent systems.


Takashi Tanaka: Profile and Leadership at PFN

Takashi Tanaka serves as the Chief Technology Officer (CTO) at Preferred Networks. His reputation as a technical visionary is firmly anchored in Japan’s AI community; he is recognized especially for driving research and product roadmaps that fuse cutting-edge software innovation with robust hardware design. Although multiple individuals named Takashi Tanaka appear across Japan’s technology and academic ecosystem, the CTO of Preferred Networks commands respect particularly for his work in scalable machine learning systems and custom deep learning accelerators.

Tanaka’s academic and practical credentials span computer engineering, artificial intelligence, and scalable systems.1 At PFN, he has championed a philosophy that blurs the traditional boundaries between software optimization and hardware co-design, arguing that the true potential of deep learning can only be unlocked by vertically integrating the AI value chain—from proprietary chips and compilers to high-level algorithms capable of adapting in real-time to hardware constraints.

Through his leadership, PFN has emerged not only as a respected research institution but also as a prime mover in the commercialization of AI hardware, orchestrating major collaborations with Toyota, Fanuc, and Mitsui, and mobilizing a culture where research breakthroughs quickly become practical, scalable solutions for industry-defining challenges.2


Preferred Networks: Company Overview and Vision

Origins and Growth Trajectory

Founded in Tokyo in 2014 by Toru Nishikawa and Daisuke Okanohara, Preferred Networks quickly distinguished itself by bridging the gap between academic AI research and real-world industrial applications. The original focus was on leveraging deep learning to address challenges in manufacturing, transportation, and healthcare, a mission reflecting Japan’s broader national priorities in industrial automation and smart mobility.3

By 2025, PFN had grown to 350+ employees and achieved unicorn status (valuation exceeding $2 billion), thanks to the convergence of several core strategies:

  • The development of proprietary deep learning hardware (MN-Core Series)
  • Creation of industrial partnerships (notably with Toyota and Fanuc)
  • A holistic approach to AI deployment, optimizing across cloud and edge

Crucially, PFN’s philosophy of “making the real world computable” has driven both its product development and partnerships, ensuring that breakthroughs rapidly transition from lab prototypes to products impacting logistics, robotics, healthcare, and beyond.4

Strategic Positioning

Preferred Networks explicitly aims to verticalize the AI value chain:

  • Developing their own deep learning accelerators (MN-Core)
  • Engineering supercomputers and cloud platforms
  • Building middleware and tools (notably Optuna for hyperparameter optimization)
  • Creating application-specific AI for diverse sectors
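As a concrete illustration of the middleware layer, the loop that a hyperparameter-optimization tool such as Optuna automates can be sketched as a plain random search. The objective function and search ranges below are hypothetical stand-ins for a real training-and-validation run, not PFN code:

```python
import random

def validation_score(lr: float, layers: int) -> float:
    # Hypothetical stand-in for training a model and measuring validation accuracy.
    return -(lr - 0.01) ** 2 - 0.1 * abs(layers - 2)

def random_search(n_trials: int = 200, seed: int = 0):
    """Naive sampler: try random configurations, keep the best one seen."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {
            "lr": 10 ** rng.uniform(-5, -1),  # log-uniform learning rate
            "layers": rng.randint(1, 4),
        }
        score = validation_score(**params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

best, score = random_search()
```

Optuna wraps this same trial/score/best contract but replaces the naive sampler with adaptive strategies and adds pruning and parallel execution.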

This end-to-end approach allows PFN to move with extraordinary speed from algorithmic innovation to field deployment, controlling cost, security, and sustainability at every layer.2

Market Impact

PFN is recognized within Japan’s unique startup ecosystem as both a technological and commercial leader, with joint ventures in autonomous driving (T2 with Mitsui), medical AI (Chugai Pharmaceuticals), and energy infrastructure (ENEOS). Internationally, PFN remains less of a household name, but within the high-performance computing and robotics communities, its influence is substantial and growing rapidly.3


The Breakthrough AI Algorithm: PFN’s Recent Announcement

Nature of the Breakthrough

In January 2025, PFN announced a major advance in AI algorithms and hardware architecture integration, presenting a new generation of deep learning models—purpose-built for execution on their own MN-Core AI accelerators. The breakthrough centers around an algorithm-hardware co-optimization pipeline that allows high-complexity models to be efficiently mapped, compiled, and executed with performance characteristics exceeding current GPU-based solutions on several fronts: compute efficiency, latency, scalability, and energy consumption.5

Key features of the announcement include:

  • Novel neural architecture search (NAS) algorithms tailored for hardware constraints (latency, energy, area)
  • Automatic code optimization and compilation geared for matrix-heavy, SIMD-style accelerators
  • Seamless integration with the MN-Core ecosystem, notably for real-time, edge, and cloud use cases

The result is up to tenfold increases in compute efficiency for generative AI inference tasks, such as those demanded by autonomous vehicles and next-gen robotics systems.6
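The core idea of hardware-aware NAS can be made concrete in a few lines: candidate architectures are ranked not on predicted accuracy alone, but with the hardware constraint folded into the objective. Everything below (search space, accuracy and latency proxies, budget) is an illustrative toy, not PFN’s algorithm; a production system would use measured or learned latency and energy models for the target accelerator:

```python
import itertools

SEARCH_SPACE = {"depth": [2, 4, 8], "width": [64, 128, 256]}  # toy search space

def predicted_accuracy(depth: int, width: int) -> float:
    # Toy proxy: accuracy improves with model capacity, with diminishing returns.
    return 1.0 - 1.0 / (depth * width) ** 0.5

def predicted_latency_ms(depth: int, width: int) -> float:
    # Toy proxy: latency scales with compute volume.
    return 0.001 * depth * width

def hardware_aware_score(depth: int, width: int, budget_ms: float = 0.6) -> float:
    acc = predicted_accuracy(depth, width)
    lat = predicted_latency_ms(depth, width)
    # Penalize any candidate that exceeds the latency budget.
    return acc if lat <= budget_ms else acc - (lat - budget_ms)

best = max(
    itertools.product(SEARCH_SPACE["depth"], SEARCH_SPACE["width"]),
    key=lambda dw: hardware_aware_score(*dw),
)
```

The search here exhausts a tiny grid; real NAS methods explore far larger spaces with evolutionary or gradient-based strategies, but the constraint-aware scoring is the same principle.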

Public and Industry Response

PFN’s announcement quickly drew attention not only from the AI research community but also from global industrial partners and policymakers seeking sustainable AI infrastructure, including Mitsubishi, Rapidus (Japan’s advanced foundry partner), and SAKURA internet (cloud infrastructure provider). This is particularly relevant against the backdrop of global semiconductor shortages and geopolitical supply risks, further underlining Japan’s need for sovereign, energy-efficient AI hardware platforms.7


Technical Architecture of the Breakthrough

MN-Core Series: Foundation and Evolution

At the heart of PFN’s approach to AI hardware acceleration is the MN-Core series of deep learning processors. Developed in partnership with Kobe University, with recent generations fabricated by TSMC, the MN-Core chips are designed from the ground up for high-density matrix computation—ideal for both AI training and inference workloads.

Architectural Highlights

  • High Ratio of Arithmetic Units: The MN-Core 2 features an unusually high transistor allocation (7.4%) to arithmetic units, maximizing arithmetic operations per watt relative to competitors, such as Nvidia’s A100 and H100, and AMD’s MI300X chips.5, 8
  • Distributed Memory SIMD Design: Each core is equipped with addressable memory (near-memory computing), drastically reducing data movement latency and boosting energy efficiency in bandwidth-bound HPC tasks.9
  • Tree Topology On-Chip Network: Enables efficient broadcast/reduction operations across all computation levels—crucial for real-time inference in robotics and autonomous vehicles.
  • No Data Cache: In contrast to most modern processors, MN-Core forgoes complex cache hierarchies in favor of synchronized instruction streams and a unique PE (Processing Element) coordination model, avoiding common bottlenecks found in asynchronous architectures.10
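The advantage of a tree-shaped reduction network can be seen even in software: combining n partial results takes only ⌈log2 n⌉ levels of pairwise operations rather than n−1 sequential steps. The sketch below illustrates the communication pattern only; it is not PFN’s implementation:

```python
from operator import add

def tree_reduce(values, op=add):
    """Pairwise (tree) reduction: log2(n) levels instead of n-1 sequential
    steps, mirroring the broadcast/reduction pattern a tree-topology
    on-chip network supports in hardware."""
    vals = list(values)
    levels = 0
    while len(vals) > 1:
        # Combine neighbours in pairs; a leftover odd element passes through.
        paired = [op(vals[i], vals[i + 1]) for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2:
            paired.append(vals[-1])
        vals = paired
        levels += 1
    return vals[0], levels

total, levels = tree_reduce(range(1, 9))  # 8 partial sums combined in 3 levels
```

The same shape underlies all-reduce collectives in distributed training, which is why broadcast/reduction support in the interconnect matters for both inference and large-scale training.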

Performance Metrics

MN-Core 2 achieves superior performance per die area and outstanding energy efficiency compared to the previous generation and to peers built on similar process technology, with MN-Core-powered systems topping the Green500 list multiple times (2020-2021).10, 11

Compiler and Software Integration

The MN-Core compiler is central to the breakthrough: it allows for rapid porting of deep learning workloads with minimal code modification and leverages powerful autoscheduling (hardware-aware NAS and automatic code optimizations). This software-centric philosophy is echoed in PFN’s assertion that “the hardware’s potential can be maximized through software,” an insight validated in repeated Green500 benchmarking events.11

Next-Generation L1000: 3D-Stacked Memory for Generative AI

PFN’s latest architectural leap, the MN-Core L1000, extends the baseline MN-Core design by introducing 3D-stacked DRAM. The core idea is to address the memory bandwidth bottleneck that impedes rapid generative AI inference—a key challenge for foundation models in autonomous systems and robotics. The architecture layers memory directly atop logic, delivering bandwidth beyond that of typical HBM and allowing massive data transfer with low power draw.6 This design is particularly suited to edge and embedded platforms, potentially catalyzing new applications in the field.20
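Why bandwidth dominates here can be shown with a simple roofline estimate: each decode step of a generative model must stream its weights through memory, so when that traffic outweighs the compute, latency is simply bytes divided by bandwidth. The model size and bandwidth figures below are illustrative assumptions, not L1000 specifications:

```python
def roofline_time_s(flops: float, bytes_moved: float,
                    peak_flops: float, bandwidth_bps: float) -> float:
    # Roofline model: execution time is bounded by the slower of
    # compute throughput and memory traffic.
    return max(flops / peak_flops, bytes_moved / bandwidth_bps)

# Illustrative decode step: a 7B-parameter model in FP16 reads all weights
# once per generated token, at roughly 2 FLOPs per parameter.
weight_bytes = 7e9 * 2
flops_per_token = 2 * 7e9

t_hbm = roofline_time_s(flops_per_token, weight_bytes, 400e12, 3e12)      # HBM-class ~3 TB/s
t_stacked = roofline_time_s(flops_per_token, weight_bytes, 400e12, 10e12) # hypothetical stacked DRAM
```

In both cases the memory term dominates the compute term by orders of magnitude, so raising bandwidth with 3D stacking cuts per-token latency almost proportionally.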


MN-Core Series: Integration and Ecosystem Expansion

PFN has built an impressive software, hardware, and ecosystem stack around MN-Core:

  • MN-Server and Supercomputing: The MN-3 supercomputer ranks at the top of international energy efficiency lists, validating the architecture’s real-world performance, especially for foundation model training.11
  • PFCP™ Cloud Computing Service: Launched in October 2024, this service extends MN-Core compute power via the cloud, accelerating adoption in both research and industry, while democratizing access to Japan’s fastest AI hardware.12
  • Open-Source Compiler and Frameworks: PFN’s decision to open-source its MN-Core deep learning compiler aligns with global best practices, allowing other researchers and industry partners to take advantage of the company’s heritage in software (e.g., Chainer, CuPy, Optuna).13
  • Domain Applications: MN-Core-powered clusters and edge deployments are already in production for logistics, manufacturing, medical imaging, scientific simulation, and, most notably, autonomous vehicles and robotics.4

Table: Summary of PFN’s Key Technical Improvements

These innovations collectively push the boundaries of what is feasible in embedded, edge, and large-scale AI, with significant implications for a range of industries.10, 11, 6


Performance and Efficiency Gains: Quantitative and Qualitative Analysis

Compute Efficiency

MN-Core processors, as measured on the Green500 list, outperformed competitors by achieving:

  • Up to 39.38 GFlops/W in practical deep learning workloads on the MN-3 supercomputer—substantially exceeding leading general-use GPUs.10
  • Higher raw performance per die area, allowing more compute within a smaller, thermally manageable chip envelope.

Latency and Real-Time Capabilities

Through hardware-aware neural architecture search and compiler optimizations, inference latency has been reduced by up to 10x in the latest L1000 variant—crucial for real-time systems such as autonomous driving, robotics, and manufacturing line control.5

Total Cost of Ownership

By reducing both production cost per chip (due to smaller die size and fewer dependencies on external suppliers) and operating costs (lower energy use, improved cooling), MN-Core delivers superior performance/cost ratios, especially when compared to the Nvidia A100’s cost/performance on a similar process node.5

System Sustainability

Energy efficiency isn’t just a marketing claim: PFN’s focus on carbon-neutral datacenters and domestic semiconductor manufacturing is shaping a sustainable AI infrastructure across Japan. This responds directly to regulatory and environmental requirements that are a growing concern among global customers and policymakers alike.12, 14


Applications: Transforming Autonomous Vehicles and Robotics

Autonomous Vehicles: Safe, Real-Time Intelligence

Challenge: Real-world autonomous navigation requires rapid (often sub-10ms) model inference from multiple sensors, including vision, radar, and LIDAR, while balancing thermal, power, and bandwidth constraints in a moving vehicle.

PFN’s Solution: By leveraging hardware-accelerated deep learning on MN-Core, PFN delivers deterministic inference speeds and lower latencies, critical for perception, path-planning, and control in both trucks and robotaxis.15, 16
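Deterministic inference lets engineers allocate the end-to-end control cycle as a fixed per-stage budget and verify it statically. The stage names and millisecond figures below are hypothetical, but they show the kind of check a real perception-planning-control pipeline enforces against a sub-10ms target:

```python
# Hypothetical per-stage latency budget (ms) for one autonomous-driving cycle.
STAGE_BUDGET_MS = {
    "sensor_ingest": 1.0,
    "perception": 4.0,
    "path_planning": 3.0,
    "control": 1.5,
}
CYCLE_BUDGET_MS = 10.0

def fits_budget(stages: dict, budget_ms: float):
    """Return whether the summed stage budgets fit the cycle, and the total."""
    total = sum(stages.values())
    return total <= budget_ms, total

ok, total_ms = fits_budget(STAGE_BUDGET_MS, CYCLE_BUDGET_MS)
```

With variable-latency hardware, worst-case figures must be used in such a table, which is one practical reason deterministic accelerators simplify safety engineering.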

Industry Projects: PFN’s T2 joint venture with Mitsui is explicitly targeting autonomous trucking, addressing the “driver drought” and fatigue risks endemic to Japanese logistics.23

Partnerships: Toyota’s investment in PFN speaks to the company’s central position in Japan’s automotive AI landscape, and its solutions are influencing global best practices in ADAS and full-stack autonomy.23

Robotics: From Factories to Everyday Life

Industrial Robotics: In partnership with Fanuc, PFN’s deep learning-powered robots have decreased industrial accident rates and improved efficiency within Japanese factories by enabling robots to adapt more quickly to real-world variability.4

Service and Domestic Robots: PFN subsidiary Preferred Robotics has commercialized mobile manipulators for homes and businesses, leveraging MN-Core and PFN’s algorithms for robust, real-time interaction with humans and dynamic environments.4

Healthcare and Scientific Robots: The company has successfully built advanced robots for biomedical research, such as automated mouse tail-vein injection (with Chugai Pharmaceutical), and robots for construction site autonomy and cluttered object manipulation in food handling.4

Edge and Embedded AI

Real-Time Inference: The architecture’s efficient memory design and low power draw make it uniquely suited for deployment in embedded form factors (robotics, automotive ECUs, industrial IoT), where both energy and latency ceilings are non-negotiable.20

Cloud × Edge: Through the PFCP™ cloud, advanced robotics and autonomous systems can now train and test models in scalable, state-of-the-art simulators, then seamlessly deploy them to field hardware.12


Wider Implications for the AI Industry and Future of Robotics

Hardware-Software Co-Design as a Trend

PFN’s breakthrough validates a growing recognition that AI model architecture, compiler, and hardware should not be designed in silos. Leading-edge efficiency comes from simultaneously optimizing neural network topology and hardware mapping—a principle now being adopted by global AI hardware players (Nvidia, Google, AMD, and more).17, 18

Energy and Sustainability as Strategic Drivers

With deep learning now consuming substantial fractions of total datacenter power, regulators and enterprises alike are demanding more sustainable infrastructure. PFN’s leadership in energy efficiency—anchored by MN-Core’s Green500 dominance and strategic partnerships with domestic foundries and cloud providers—offers a blueprint for sovereign, green AI infrastructure in other nations as well.19, 14

Open Ecosystems and Democratization of AI

PFN’s move to open-source its compiler and offer cloud rental of MN-Core compute signals a shift toward broader industry collaboration. Such democratization lowers the barrier to advanced AI deployment in manufacturing, healthcare, and mobility, potentially offsetting talent shortages and regional capacity imbalances.13

AI at the Edge

As robotics and autonomous vehicles proliferate, much of the real-time computation will need to happen ‘at the edge,’ not in distant clouds. PFN’s optimizations for embedded, low-power platforms point toward a future where edge-AI becomes standard in smart vehicles, last-mile delivery robots, and connected manufacturing equipment.20


Competitive Landscape and Industry Trends

Global Competitors and Differentiation

PFN’s emergence in the AI accelerator arena places it among a select cohort of companies investing in bespoke AI hardware, including Nvidia (GPU, NPU), AMD, Google (TPU), Amazon, and Microsoft. While these firms often target the largest cloud datacenter deployments, PFN’s unique value lies in:

  • Industry focus: Targeting logistics, robotics, and manufacturing, not just cloud NLP.
  • Vertical integration: Complete stack from chips to apps, informed by a Japanese tradition of hardware-software coordination.
  • Sustainability: Emphasis on in-country manufacturing and ecosystem greening.7, 21

Partnerships and Strategic Alliances

PFN’s roster of partners amplifies its market reach:

  • Toyota: Primary backer in automotive and mobility R&D.23
  • Fanuc: Deep integrations in robotics and industrial automation.4
  • Mitsubishi, Mitsui, ENEOS, SAKURA internet, Rapidus: Extending from logistics and energy to national-scale AI infrastructure.12, 19, 22, 23
  • Kobe University, IIJ: Academic and infrastructure partners.
  • Open-source communities: For frameworks such as Chainer, CuPy, and Optuna.13

These alliances not only secure PFN’s domestic dominance but also foster a climate for AI infrastructure sovereignty—a growing global trend given supply chain uncertainties and security concerns.22, 23


Future Applications in Autonomous Systems

Beyond Level-4 Autonomy

The efficiency and flexibility of the MN-Core and PFN application stack may soon enable scenarios previously limited by compute bottlenecks:

  • Fully autonomous delivery vehicles operating safely in crowded urban environments.
  • Sophisticated robotic co-workers sharing dynamic workspaces with humans, instrumented for dexterous handling and voice interaction.
  • AI-powered scientific labs capable of self-optimizing experimental protocols in real time.
  • Advanced manufacturing lines where AI controls integrate across vision, robotics, and materials science optimizers.

Moreover, the synergy of cloud-scale training and distributed edge deployment means these advances can quickly diffuse across industries, accelerating the feedback loop of AI innovation.20

Democratization and National Policy Influence

PFN’s strategic focus is highly aligned with Japanese national priorities, including aging workforce mitigation, domestic energy security, and technology exports. As such, the company’s breakthroughs and infrastructure could become a template for AI policy and industry transformation initiatives globally. The open-access nature of its platforms positions PFN to influence emerging international AI and safety standards, especially in functional safety-critical environments like factories and autonomous vehicles.24


Counterpoints and Challenges

No technological breakthrough is immune to challenge. For PFN and peers in the accelerator arms race, several caveats remain:

  • Transnational Competition: Nvidia, AMD, and cloud hyperscalers still control much of the deployment surface; market penetration outside Japan will require significant ecosystem engagement.
  • Software Ecosystem Lock-in: As open source and proprietary platforms compete, standards for model portability and security will remain contested.
  • Sustaining Hardware Innovation: The pace of semiconductor process innovation is slowing (post-7nm), and 3D stacking, while powerful, brings challenges in yield and cost.
  • Regulatory Hurdles: Autonomous robotics and vehicles face demanding safety certification regimes, especially as AI shifts from ‘assistive’ to ‘decisive’ roles.
  • Talent and Training: Vertically integrated AI companies must constantly nurture rare hybrid expertise at the intersection of hardware, systems, and machine learning.

PFN’s approach, however, is explicitly designed to navigate these challenges, leveraging deep partnerships and a culture of rapid, high-impact innovation.


Conclusion

Takashi Tanaka’s leadership at Preferred Networks has catalyzed not just a single breakthrough, but a paradigm shift in the design and deployment of AI for real-world, resource-constrained systems. By vertically integrating hardware and software in the MN-Core series, and by prioritizing energy efficiency, compiler-level flexibility, and sustainability, PFN offers a model for how the next generation of intelligent systems will be engineered—responsive not just to abstract benchmarks, but to the real demands of autonomous vehicles, robotics, industrial automation, and beyond.

Preferred Networks’ story—one of relentless optimization, cross-disciplinary collaboration, and mission-driven innovation—is now influencing global trends in AI democratization, infrastructure sovereignty, and sustainable computation. As robotic and autonomous systems increasingly shape our world, breakthroughs like PFN’s will determine how quickly, safely, and equitably the benefits of deep learning are realized in practice.


References