Rust: Meet SwiftUI. Hello Allui!
Or: Wrapping gpui.rs and exploring what's possible when modern declarative UI meets high performance GPU-accelerated desktop development.
The Problem, In Code
Modern UI frameworks like SwiftUI and Jetpack Compose ship with high-level abstractions out of the box:
VStack(spacing: 16) {
Text("Welcome")
.font(.title)
Button("Get Started") {
// action
}
.buttonStyle(.borderedProminent)
}
.padding(24)
.background(.blue)
.cornerRadius(12)

GPUI, the GPU-accelerated framework powering Zed, is powerful and performant, but it operates at a lower level of abstraction:
div()
.flex()
.flex_col()
.gap(px(16.0))
.child(
div().child("Welcome")
.text_size(px(28.0))
.font_weight(FontWeight::BOLD)
)
.child(
div()
.child("Get Started")
.bg(blue())
.px(px(16.0))
.py(px(8.0))
.rounded(px(6.0))
.on_click(|_, _, _| {
// action
})
)
.p(px(24.0))
.bg(blue())
.rounded(px(12.0))

So here I was asking myself: could this be lifted into a higher-level abstraction, with all the modern affordances of higher-level UI frameworks?
Why Rust
I've been using Zed extensively for the past year, and it's difficult to use it without at least having heard about gpui.rs. It's super fast and flexible, but the flex-based layout system felt verbose in ways that modern declarative frameworks have moved beyond.
I kept thinking: what if Rust had something closer to SwiftUI or Jetpack Compose? Not just the API surface, but the mental model: stacks, modifiers, composition. Being fairly well versed in SwiftUI's internals, I chose it as a reference and started experimenting, together with my favorite French colleague (Mr. Claude).
So I started small and tried building a VStack abstraction on top of gpui, with spacing, alignment, and a few basic modifiers. If that worked and felt good, maybe the rest would too.
SwiftUI Semantics, Rust Performance
The core idea was simple: bring SwiftUI's declarative API to GPUI, not as a shallow wrapper, but with the same semantic behavior. In SwiftUI, modifier order matters - .padding().background() is different from .background().padding(). Layout stacks have alignment and spacing. Components compose naturally.
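That order-sensitivity falls out naturally if each modifier wraps the previous view in a new container, so the outermost modifier is applied last. Here is a minimal plain-Rust sketch of the idea; the types (Text, Padded, Background) are illustrative stand-ins, not Allui's actual API:

```rust
// Why modifier order matters: each modifier wraps the previous view,
// so .padding().background() and .background().padding() produce
// structurally different view trees.
trait View {
    fn describe(&self) -> String;
}

struct Text(&'static str);
impl View for Text {
    fn describe(&self) -> String {
        format!("Text({})", self.0)
    }
}

// A view wrapped with inner padding.
struct Padded<V: View>(V, f32);
impl<V: View> View for Padded<V> {
    fn describe(&self) -> String {
        format!("Padded({}, {})", self.0.describe(), self.1)
    }
}

// A view wrapped with a background fill.
struct Background<V: View>(V, &'static str);
impl<V: View> View for Background<V> {
    fn describe(&self) -> String {
        format!("Background({}, {})", self.0.describe(), self.1)
    }
}

// Extension-style modifiers, blanket-implemented for every view.
trait Modifiers: View + Sized {
    fn padding(self, amount: f32) -> Padded<Self> {
        Padded(self, amount)
    }
    fn background(self, color: &'static str) -> Background<Self> {
        Background(self, color)
    }
}
impl<V: View> Modifiers for V {}

fn main() {
    // padding first: the background paints behind the padded area.
    let a = Text("Hi").padding(24.0).background("blue");
    // background first: the padding sits outside the colored area.
    let b = Text("Hi").background("blue").padding(24.0);
    println!("{}", a.describe()); // Background(Padded(Text(Hi), 24), blue)
    println!("{}", b.describe()); // Padded(Background(Text(Hi), blue), 24)
}
```

The wrapper-type approach also means the compiler checks the whole modifier chain statically, with no dynamic dispatch needed.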
Here's what that looks like in Allui:
VStack::new()
.spacing(16.0)
.child(
Text::new("Welcome")
.font(Font::title())
)
.child(
Button::new("Get Started", cx.listener(|this, _, _, _| {
// action
}))
.button_style(ButtonStyle::BorderedProminent)
)
.padding(24.0)
.background(Color::blue())
.corner_radius(12.0)

If you know SwiftUI, you already know Allui. The names match. The behaviors match. Even the subtle details match, like how modifiers wrap views in containers.
The first VStack worked. Then HStack and ZStack. Then Text, Button, Spacer. Each new component validated the approach a bit more: GPUI's GPU-accelerated rendering + SwiftUI's ergonomic API = something that actually felt good to use.
Even at this early stage, I knew I was just getting started. A custom DSL could eventually make this feel even more idiomatic, closer to SwiftUI's syntax, but that would come later. First, prove the foundation works. Who knows, maybe we could delve even deeper into actual data flow and navigation down the line? That would potentially be something legitimately new, rather than purely standing on the shoulders of giants (basically the entire team at Zed).
The Tangent: AI as an Amplifier
I've been following AI development for years. For a long time, coding assistants were decent at tab completion but weak at generating anything beyond boilerplate. Where they excelled was information compression: I could learn unfamiliar codebases incredibly fast by asking questions and getting coherent summaries. Hallucination was much less of a problem when the AI was explaining existing code versus writing new code from scratch. And if it did end up making something up, I'd learn about it relatively quickly.
That had been my approach since, well, since ChatGPT hit the market. It changed dramatically in November 2025, when Opus 4.5 launched.
ChatGPT 5+ was already quite good at understanding problems and creating plans, but Opus 4.5's reliable tool calling and performance at writing functional code was something else entirely. Suddenly, AI wasn't just a research assistant - it was a legitimate coding partner.
Claude Code was my entry point, but as I got deeper into Allui, I wanted more control. I needed to mix and match models, add plugins, and customize workflows. So I turned to Crush CLI and eventually settled on OpenCode, especially after discovering Oh My OpenCode.
The Laziness Problem
Here's the thing about LLMs - just like most humans: they're lazy. Unless you continuously push them, they'll take shortcuts. They'll hallucinate. They'll implement the happy path and skip edge cases. They'll write a partial solution and call it done.
This is where modern tooling like Ralph Wiggum and Oh My OpenCode's Sisyphus become incredibly helpful. These tools work around AI laziness by enforcing continuous iteration—plan, review, re-plan, implement, test, review again. No shortcuts. No "good enough for now."
The Allui Build Loop
Once I had the first few components implemented by hand, establishing the patterns for how modifiers wrap, how stacks compose, how stateful widgets integrate with GPUI, I could hand off to the agents:
- Plan - Design the next component (e.g., "Add a LazyVGrid with virtualized scrolling and flexible column sizing")
- Review - Agent proposes an implementation following established patterns
- Re-plan - Refine based on what worked or didn't
- Implement - Agent writes the code
- Test - Run the storybook, verify behavior matches SwiftUI semantics
- Review - Use Greptile for continuous code review
Because the patterns were consistent and reference implementations existed in the codebase, agents could extrapolate reliably. I stepped back and acted as architect only. Reading and reviewing. Ensuring design coherence while the agents handled implementation.
This is how Allui quickly grew from a handful of layout primitive wrappers to 8,000+ lines of code over the course of a weekend: VStack, HStack, ZStack, ScrollView, List, LazyVStack, Grid, LazyVGrid, LazyHGrid, and a full suite of display and input components (Text, Button, Toggle, TextField, Slider, Picker, etc.).
It's not that any of this is an incredible accomplishment - no. That title belongs to the gpui.rs and gpui-components crates. For me, however, it massively accelerates what I can do over the course of a weekend. With agents handling the mechanical work while tools like Sisyphus kept them going, I could focus entirely on design decisions and architectural direction.
Batteries Included (But Not the Whole Car)
Allui today is a mostly functional component library in its early alpha stages. Most of the basics you'll need to build desktop UIs are present:
Layout primitives:
- VStack, HStack, ZStack - Familiar stack-based composition
- Spacer, ScrollView, List - Flexible spacing and containers
- LazyVStack, LazyHStack - Virtualized lists for large datasets
- Grid, LazyVGrid, LazyHGrid - Static and virtualized 2D layouts
Components:
- Display: Text, Button, Image, Label, Link, ProgressView, Divider
- Input: Toggle, TextField, SecureField, TextEditor, Slider, Stepper, Picker
Modifiers:
- Layout: padding(), frame(), fixed_size(), aspect_ratio()
- Visual: background(), foreground_color(), corner_radius(), border(), shadow(), opacity()
- Behavior: hidden(), disabled(), on_tap_gesture()
Here's a real example - a settings screen with sections:
List::new("settings")
.list_style(ListStyle::inset_grouped())
.child(
Section::new()
.header("Account")
.child(Text::new("Profile"))
.child(Text::new("Privacy"))
)
.child(
Section::new()
.header("Appearance")
.child(
Toggle::new_with_handler(
"Dark Mode",
self.is_dark_mode,
cx.listener(|this, checked, _, cx| {
this.is_dark_mode = *checked;
cx.notify();
})
)
)
)

What Allui Is NOT (Yet)
Let me be clear: Allui is not a complete SwiftUI-like system with state management and automatic re-rendering. There's no @State, no @Binding, no attribute graph driving reactivity.
State management and rendering are still handled by gpui.rs - you use Entity, Context, and cx.notify() just like you would in raw gpui.rs. Allui is a collection of higher-level, more familiar components that sit on top of that foundation.
The state management abstractions will come. But right now, I am focused on making the presentation layer ergonomic while gpui.rs handles the reactivity.
You can run the interactive storybook to see all components in action:
cargo run --example storybook --release

What I'm Curious About Next
The component library works, and honestly, I'm having fun just playing around with what's possible. There are a few directions that excite me, though I'm not committing to any particular path yet.
State Management & Unidirectional Data Flow
gpui's Entity and Context system works, but it's verbose. Every state change requires explicit cx.notify() calls. There's no automatic dependency tracking, no declarative bindings.
I keep thinking about unidirectional data flow (UDF) patterns. On the web, we have Redux. In Swift, there's The Composable Architecture (TCA) and custom-made unidirectional architectures (which I wrote about in Why Your ViewModel Is Lying To You).
Could I build a reducer-effect system that sits on top of Allui? Something where state flows in one direction, side effects are isolated and testable, and the render pipeline just works?
I'm sure libraries like this already exist in Rust, but that's not really the point. I learn by building, and experimenting with these patterns in a new language sounds fun.
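The core of such a system fits in very little code. Here's a rough sketch of what I mean by a reducer-effect loop; everything here (State, Action, Effect, the reduce function) is my own hypothetical shape, not an existing Allui or gpui API:

```rust
// A tiny unidirectional-data-flow core: state flows one way through a
// pure reducer, and side effects are returned as plain values so they
// stay isolated and testable instead of being executed inline.
#[derive(Debug, Clone, PartialEq)]
struct State {
    count: i32,
}

enum Action {
    Increment,
    Decrement,
    Reset,
}

// Effects are *descriptions* of follow-up work, handled elsewhere.
#[derive(Debug, PartialEq)]
enum Effect {
    None,
    Log(String),
}

// The reducer is the only place state changes; given the same state
// and action, it always produces the same result.
fn reduce(state: &mut State, action: Action) -> Effect {
    match action {
        Action::Increment => {
            state.count += 1;
            Effect::None
        }
        Action::Decrement => {
            state.count -= 1;
            Effect::None
        }
        Action::Reset => {
            state.count = 0;
            Effect::Log("state was reset".into())
        }
    }
}

fn main() {
    let mut state = State { count: 0 };
    reduce(&mut state, Action::Increment);
    reduce(&mut state, Action::Increment);
    let effect = reduce(&mut state, Action::Reset);
    assert_eq!(state.count, 0);
    assert_eq!(effect, Effect::Log("state was reset".into()));
    println!("final state: {:?}", state);
}
```

In an Allui integration, the render layer would subscribe to state changes and a runtime would interpret the returned effects, but the pure core above is what makes the pattern testable without any UI at all.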
Attribute Graphs & Reactive Bindings
SwiftUI's secret sauce isn't just the API - it's the attribute graph underneath. State changes propagate automatically. Bindings are two-way but safe. The framework knows exactly what needs to re-render.
Could Allui have something similar? A system where state dependencies are tracked automatically and changes trigger minimal, precise re-renders? Certainly. I just don't know yet whether I can pull something like that off by myself over coffee and a few weekends.
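The core mechanic, knowing which views depend on which state so that only those get invalidated, can at least be prototyped with a dirty-flag dependency list. This toy sketch is nowhere near SwiftUI's real attribute graph (every name here is invented for illustration); it only shows the minimal-invalidation idea:

```rust
use std::cell::{Cell, RefCell};
use std::rc::Rc;

// A toy "attribute": a value plus the list of dependent views to
// invalidate when it changes. Changing one attribute dirties only
// its own dependents, not the whole tree.
struct Attribute {
    value: Cell<i32>,
    dependents: RefCell<Vec<Rc<Derived>>>,
}

// A stand-in for a view that derives its output from some attribute.
struct Derived {
    name: &'static str,
    dirty: Cell<bool>,
}

impl Attribute {
    fn new(value: i32) -> Self {
        Attribute { value: Cell::new(value), dependents: RefCell::new(Vec::new()) }
    }
    // Register a view as depending on this attribute.
    fn observe(&self, d: Rc<Derived>) {
        self.dependents.borrow_mut().push(d);
    }
    // Setting the value marks only *this* attribute's dependents dirty.
    fn set(&self, value: i32) {
        self.value.set(value);
        for d in self.dependents.borrow().iter() {
            d.dirty.set(true);
        }
    }
}

fn main() {
    let width = Attribute::new(100);
    let height = Attribute::new(50);
    let title_view = Rc::new(Derived { name: "title", dirty: Cell::new(false) });
    let body_view = Rc::new(Derived { name: "body", dirty: Cell::new(false) });

    width.observe(title_view.clone());
    height.observe(body_view.clone());

    width.set(120); // only the title view becomes dirty
    assert!(title_view.dirty.get());
    assert!(!body_view.dirty.get());
    println!("{} needs re-render: {}", title_view.name, title_view.dirty.get());
}
```

A real implementation would also need transitive propagation, topological re-evaluation order, and two-way bindings, which is exactly the part I'm unsure fits into a few weekends.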
Domain-Specific Languages
Even now, I wonder if Allui could benefit from a DSL. Something like:
VStack(spacing: 16) {
Text("Welcome")
.font(.title)
Button("Get Started") {
// action
}
.buttonStyle(.borderedProminent)
}
.padding(24)

No .child() calls. No ::new(). Just declarative UI code that compiles down to the current Allui API. Basically, it's all about making Rust feel more natural for UI development.
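Even without procedural macros, a declarative macro can already hide the .child() plumbing. This is a self-contained sketch against a stub builder (the vstack! macro and the stub VStack are my own invention, not Allui's current API), just to show the expansion idea:

```rust
// A sketch of how a declarative macro could hide .child() calls:
// `vstack!` expands a brace-style child list into chained .child()
// calls on a builder. A stub builder stands in for the real type.
#[derive(Debug)]
struct VStack {
    spacing: f32,
    children: Vec<String>,
}

impl VStack {
    fn new() -> Self {
        VStack { spacing: 0.0, children: Vec::new() }
    }
    fn spacing(mut self, s: f32) -> Self {
        self.spacing = s;
        self
    }
    fn child(mut self, c: impl Into<String>) -> Self {
        self.children.push(c.into());
        self
    }
}

// The macro turns a comma-separated list of children into .child() calls.
macro_rules! vstack {
    (spacing: $s:expr; $($child:expr),* $(,)?) => {
        VStack::new().spacing($s)$(.child($child))*
    };
}

fn main() {
    let stack = vstack![
        spacing: 16.0;
        "Welcome",
        "Get Started",
    ];
    assert_eq!(stack.children, vec!["Welcome", "Get Started"]);
    println!("{:?}", stack);
}
```

A full DSL would likely need a proc macro to support nested blocks and trailing modifiers, but the macro_rules! version shows the compile-down target is the existing builder API, unchanged.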
Playing Around aka FAFO
Look, I'm not promising any of this will happen. Right now I'm just enjoying the exploration. The initial steps - getting the components working, seeing SwiftUI patterns translate to Rust - have been genuinely exciting. Where it goes from here? We'll see.
What I do know is that there's room for a higher-level UI framework in Rust that doesn't rely on web technologies and comes with everything included. Whether Allui becomes that or just teaches me interesting things along the way, either outcome sounds good to me.
Platform Diversity in the Age of AI
Here's the thing that excites me beyond just the technical challenge: we're at a moment where desktop Linux could actually become a meaningful player in the computing ecosystem.
I've been following DHH's progression into Linux, starting with Omakub and now with Omarchy, which I use at home. There's real momentum building. People are tired of being locked into Apple's ecosystem or wrestling with Windows. They want alternatives that actually work.
But here's the problem: if you're a mobile developer who knows SwiftUI, and you want to bring your product thinking to the desktop (even better: Linux), what do you use? Electron? That means dragging Chromium into every app and watching your memory usage explode. So why not have some fun and try Rust instead?
AI Changes the Economics
And here's where AI matters: building something like this solo used to take way more than a weekend and some coffee - we're probably talking a good week or two. That's a significant speed-up (10x, anyone?).
AI doesn't make it trivial, but it changes the math. I can focus on architecture and design while agents handle implementation. What would have taken years might take months. What would have required a team can be done by one person with clear vision and the right tools.
This isn't unique to Allui - it's true for a lot of infrastructure work that was previously gated by sheer human-hours required. AI accelerates the transformation from "interesting idea" to "working software" in ways that weren't possible before.
Maybe that means more competition for platforms. More diversity in tooling. More experiments that actually ship.
Try It, Follow Along, or Just Watch
Where Allui Lives
Allui is on GitHub: github.com/antimateriallabs/allui-rs
Clone it, run the storybook, see if the patterns feel familiar:
git clone https://github.com/antimateriallabs/allui-rs
cd allui-rs
cargo run --example storybook --release

If you're a mobile developer curious about Rust, this might be a gentler entry point than starting with raw gpui.rs or other lower-level frameworks. The API should feel immediately familiar.
If you're interested in AI-assisted development, the AGENTS.md file documents the patterns and conventions that made agent collaboration effective. It's basically the architectural guide that let agents extrapolate reliably.
Want to Watch Me FAFO?
I'm not asking for contributors yet. I'm not promising a roadmap. Right now I'm just having fun exploring what's possible - building things, learning, seeing where it goes.
But if this resonates with you, follow along. Maybe you'll build something with it. Maybe you'll fork it and take it in a different direction. Maybe you'll just watch and see what happens.
Either way, I'll be documenting the journey here.