Interview with Oded Margalit, Head Scientist at NextSilicon

How to push the boundaries of high-performance computing

In this interview, Oded highlights NextSilicon's contribution to the ODISSEE project through its Maverick accelerator. NextSilicon works closely with project partners to optimize both hardware and software, ensuring the Maverick platform meets the specific needs of HPC applications.

Can you introduce yourself and NextSilicon?

I’m Oded Margalit, the head scientist at NextSilicon. I have experience in both research and industry. At NextSilicon, we’re building the next-generation compute acceleration platform, called Maverick. If you’re familiar with CPUs, they’re slow but easy to program. GPUs are faster but less flexible, requiring programming in CUDA or similar languages. Then there are FPGAs, which are very fast but extremely difficult to program. NextSilicon’s Intelligent Compute Acceleration (ICA) platform combines the speed of FPGAs with the flexibility of CPUs. You can take your legacy source code, compile it, and run it on our platform.

Can you tell us more about the Maverick?

The Maverick is a dual-die design with 8 billion transistors in a 3-nanometer geometry. It’s mounted on a PCIe card that can be installed in servers. You can put several of these cards in a server to achieve very fast, low-latency networking and impressive compute power.

In what fields is your technology currently being used?

We focus on high-performance computing broadly. That includes AI and large language models, of course, but we don’t optimize only for inference or training. Because we support FP64 and a general programming model, Maverick is suitable for physics simulations, CFD, molecular dynamics, and any application that can be parallelized. If a workload is highly serial and cannot be parallelized, it’s naturally less suited, but anything that can be broken into parallel work is a good candidate.
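To make that distinction concrete, here is a minimal, generic C++ sketch (not NextSilicon code) contrasting a loop whose iterations are independent, and therefore parallelizable, with a recurrence whose iterations depend on each other and must run serially.

    // Generic illustration only, not NextSilicon code.
    #include <cstddef>
    #include <vector>

    // Parallelizable: each element is updated independently of the others,
    // so the work can be spread across many compute units.
    void scale(std::vector<double>& x, double a) {
        for (std::size_t i = 0; i < x.size(); ++i) {
            x[i] *= a;
        }
    }

    // Inherently serial: each step of this recurrence needs the previous
    // value, so the iterations cannot be reordered or run side by side.
    double iterate_map(double x0, double r, int steps) {
        double x = x0;
        for (int i = 0; i < steps; ++i) {
            x = r * x * (1.0 - x);
        }
        return x;
    }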

How is NextSilicon participating in ODISSEE and what have you delivered so far?

As part of the project we shipped hardware to partners and ran onboarding sessions: we delivered four cards to each main partner site and held several days of training so researchers could run their existing software on Maverick. The goal is collaboration: partners bring domain knowledge and use cases from CERN, and we work together to integrate and optimize their codes on our platform.

« Partners bring use cases, and we work together to integrate and optimize their codes on our platform. »

« If something runs frequently, we can duplicate it to run thousands of times in parallel. »

Do users need to rewrite code to run on Maverick?

Maverick is designed to run legacy code with minimal changes; you can often run large codebases without rewriting a single line. That said, to squeeze out top performance we collaborate with users: small annotations, for example marking a loop as parallelizable, let us extract much more performance.
Our platform is adaptive and optimizes on the fly. If something runs frequently, we can duplicate it to run thousands of times in parallel. This optimization happens dynamically as the code runs.
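As a hedged illustration of the kind of annotation Oded describes: the interview does not specify NextSilicon's actual syntax or toolchain interface, so the C++ sketch below uses a standard OpenMP-style pragma purely as a stand-in for "marking a loop as parallelizable".

    #include <cstdint>
    #include <vector>

    // Hypothetical example; the pragma stands in for whatever annotation
    // the NextSilicon toolchain actually accepts.
    void saxpy(std::vector<float>& y, const std::vector<float>& x, float a) {
        const std::int64_t n = static_cast<std::int64_t>(y.size());
        // Mark the iterations as independent so the toolchain may spread
        // them across many workers; frequently executed loops like this are
        // the candidates that can be replicated and run in parallel.
        #pragma omp parallel for
        for (std::int64_t i = 0; i < n; ++i) {
            y[i] += a * x[i];
        }
    }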

What are the benefits of participating in the ODISSEE project?

I’m a strong believer in NextSilicon and the Maverick. The ODISSEE project is an opportunity to showcase our technology to the world and collect use cases. CERN has a significant HPC center, and we’re excited to work with them. We’ve delivered two Maverick cards to CERN, and they’re running proof of concept tests. We hope that the results will show that our platform is faster, greener, and more efficient than others.

This project has received funding from the European Union’s Horizon Europe research and innovation program under grant agreement N°101188332. This website reflects only the author's view and the Commission is not responsible for any use that may be made of the information it contains.