System76 Transmission Log: Linux Hardware, COSMIC News and Enhancing AI with View.IO

3, 2, 1, 0.

Welcome aboard the System76 Transmission Log.

Our broadcast is about to begin.

This is the latest on System76
computers, manufacturing, and Pop!_OS.

Now for your in-orbit crew.

You just got back from
Linux Fest Northwest, right?

How was that?
I did.

It was super fun.

That is one of our favorite events of
the year because it is so Linux festy.

It is how a fest should be.

It's just super fun and
carefree and really relaxed and informal

and just full of people that know a lot
of really important stuff about Linux.

It is a festive fest of fantastic people.

We had Carl doing a COSMIC DE
presentation again.

A year ago when he presented, we had just launched
the alpha, and now we're on Alpha 7, and

he's showing off all the accessibility
settings and the window changes,

and there's just so much progress that
has been made this whole year.

It's really fun to
see all the excitement and

the standing room only presentation room
packed with people and just

the hype that comes along with it helps
fuel the development for the future.

There's so much excitement.

There's definitely no stopping
COSMIC development right now.

Absolutely not.

It feels so different than when
we were using it a year ago.

Did you guys or did you not have a barbecue?

We had an un-barbecue. It was not raining.

We thought it could possibly rain, and so
we ordered sandwiches and sides instead of

renting a grill and setting up everything
in the rain and cooking it in the rain.

As much fun as that would have been, the sandwiches
were a little bit easier to

manage if it was going to rain.

But it ended up being the
most beautiful day ever.

And it turned into a little picnic in the
grass with people eating

the sandwiches and chips and everything.
It's good.

I feel like picnics
make food taste better.

Really?
Yeah.

You got the breeze, you're surrounded by
nature, you got a nice sandwich

that doesn't have sand in it.

So I think it's a
good environment for meals.

No one brought their own
barbecue sauce, though?

I did not supervise any
sauce applications.

You did start a new tradition, didn't you?
Yes.

System76 and Jupiter Broadcasting
folks are going to be listening

for birthdays next year.

So if anyone going to the fest has a
birthday, make sure you say it out loud and

let someone on one of those teams
know about it, because

we will be throwing a birthday party
for whoever has a birthday next year.

And if no one has a birthday,
the cake will just go to waste.

Someone will have a birthday.

We will celebrate.

On the COSMIC news, let's go over a
little bit of what's new one year later.

We did release Alpha 7 pretty recently.

Now, we've got pinned workspaces, some
accessibility features, a lot

of other features to cover.

You can read all that in
the COSMIC Alpha 7 blog.

I do have a couple of corrections
from that blog, actually.

So, workspace naming: some people
saw that written in the blog.

That's not
actually a thing yet.

It was in there by mistake and will
probably come around

after the first release.

And the tooltips.

There were multiple things listed
that tooltips are functional with.

They're actually currently only functional
with the tray and the

dock, but they are there.

Just wanted to make sure those
corrections got out there.

Thanks, Alex.
Let's talk about some hardware.

The Bonobo is back with a vengeance.

Part of that vengeance is it's now got a
max RAM of 192 gigabytes, which is a lot

of RAM and probably enough
for multiple vengeances.

But it's also become quite the
observatory of a laptop, Emma.

It's got an 18-inch display,
16:10 2K or 16:10 4K at 240 hertz

or 200 hertz, respectively.

And that's powered by a brand
new NVIDIA RTX 5080 or 5090 GPU.

So you get a ton of power out of
this thing, and it looks gorgeous.

Yeah, it is a beast of a laptop.
Super beast.

Another little beast ready for the public.

It's the new Serval WS laptop.

That one has a 24-core Core Ultra 9 processor
and also has a 16:10 2K

display at 240 hertz, but it's a 16-inch
display, so a little bit smaller than the

Bonobo, and also with an
NVIDIA RTX 50-series GPU.

So we made a little crime-scene-themed
marketing campaign to

show off how fast this guy is, and
hope you guys enjoy watching the

videos that we're putting out.

Yeah, it is criminally fast.

We have some testimony for our true crime
doc from our investigative

team, so I look forward to that.

We just thought we'd
have some fun with it.

Now, on the desktop end, for Thelio Prime
and Thelio Mira, our handcrafted desktops,

they've now been upgraded
with a 5060 Ti option.

More systems added to the next generation.
Nice.

We also have a fun interview
for everyone today.

Emma sat down with Vue.io.

Let's see how that went.
Here we go.

How are you doing, guys?
Doing great, Emma.

Thank you so much for having us here.

We're excited to be here.
Awesome.

Can you guys introduce yourselves?

Yeah, definitely.

My name is Joel Krisner.

I'm a founder at Vue.

I'm the CEO.

I'm also responsible for development of
the core platform, I've got about 30 years

in tech, and I'm just really glad to be here.
So thanks for inviting us.

On the line with me are
two of the other founding team

members. Keith is a founder and our CRO and CPO;

I'll let him introduce himself. And Matt is
our field chief technology officer.

So, Keith and Matt, if you'd like to
say a word or two about yourselves.

Hi, Emma, and thanks again
for having us on board today.

I'm Keith Bartó, as Joel said, co-founder
and Chief Product and Revenue Officer.

Really excited to be working with System76
and Vue AI on your

customer support chatbot.

Really enjoy working with your team and
looking forward to having

our conversation today.
And this is Matt White, the field CTO.

And I've had a great experience.

I also have an Astra, and it
is just an amazing system.

And working with your team
has been just a pleasure.

Great.
Well, let's jump into it.

Can you tell us a little
bit about what exactly Vue.io is?

So the biggest challenge that we see
enterprises face is everybody wants

AI, and they want the AI outcomes.

They don't necessarily want to send
internal data: things that are

confidential, things that are related to
company intellectual property or company

finances, or anything that's regulated.

They don't want to send
that out to ChatGPT.

And it's not that ChatGPT is bad
at managing or securing data.

They're fantastic.

They have a great team.

They're probably better at securing data
than any enterprise would be because they

have to deal with a litany of
different enterprises' data.

So every enterprise wants that
ChatGPT-like experience on their own data,

but they want to retain control
of the data while doing that.

So what we have built at Vue is a fully
self-contained on-premises platform that

allows you to get a ChatGPT-like
experience behind the firewall using your

corporate data, your corporate
intellectual property, without having to

send that data out of the
company in order to do so.

So that encompasses everything from the
ingestion of the data, so how the data is

acquired or deposited into the system, how
that data is pre-processed, processed,

how it is laid out, how it is persisted in
a variety of different repositories that

are each unique in the
way that they apply to AI.

And then from there, the experience the
operator needs to be able to have in order

to create an AI agent on top of that data.

So there's really two pillars to Vue.

The first is, how do we get data into the
system and processed and ready

for AI usage very quickly?

And the second pillar is, how do we craft
and create that AI experience and then

deploy that out to the user that
needs to have that AI experience?

So you could think of the first as,
get the data in and get it ready.

And the second is, let's craft the
experience and then go deploy it on the

intranet for the customer support team or
on the internet for the users that

need the customer support experience.

Or let's craft and create the HR analyst
experience, the conversational experience

that they need to have to be able to ask
questions of employee data, and then how

that gets deployed behind the intranet,
the RBAC boundary, for the HR analyst.

So on the intranet,
the HR analyst logs in.

It checks to make sure
they are actually an HR analyst,

that they have the appropriate
rights to chat with this data.

Or maybe we want to allow users to send
SMS messages to this number to have a

conversational experience with our team
about this particular topic, or it

could be Slack, it could be Teams.

So one part is, how do we get the
data in and ready for AI usage?

And the other is, how do we get the data
out to the people that

need to be able to use it?
Nice.
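
Vue's actual SDK isn't shown in the episode, so the following is a rough, purely illustrative Python sketch of the two-pillar pattern Joel describes: pillar one ingests and prepares data as a knowledge base, and pillar two crafts a conversational experience on top of it for the people who need it. Every class, function, and value below is a hypothetical stand-in, not Vue's real API.

```python
# Hypothetical sketch of the "two pillars" described above. None of these
# names come from Vue's SDK; they only illustrate the pattern of
# ingest-and-prepare (pillar one) versus craft-and-deploy (pillar two).

from dataclasses import dataclass, field


@dataclass
class KnowledgeBase:
    """Pillar one: get the data in and ready for AI usage."""
    name: str
    documents: list = field(default_factory=list)  # (doc_id, text) pairs

    def ingest(self, doc_id: str, text: str) -> None:
        # A real platform would chunk, embed, and index here;
        # this sketch just stores the raw text.
        self.documents.append((doc_id, text))

    def retrieve(self, query: str, k: int = 3) -> list:
        # Naive keyword overlap stands in for vector/graph retrieval.
        words = query.lower().split()
        scored = sorted(
            ((sum(w in text.lower() for w in words), text)
             for _, text in self.documents),
            key=lambda pair: pair[0],
            reverse=True,
        )
        return [text for score, text in scored[:k] if score > 0]


@dataclass
class Assistant:
    """Pillar two: craft the experience and deploy it where it's needed."""
    kb: KnowledgeBase
    system_prompt: str

    def answer(self, question: str) -> str:
        # A real deployment would hand the retrieved context to a model
        # running behind the firewall; here we only assemble the prompt.
        context = "\n".join(self.kb.retrieve(question))
        return f"{self.system_prompt}\n\nContext:\n{context}\n\nQ: {question}"


# Usage: build a support knowledge base, then stand up an assistant on it.
kb = KnowledgeBase("customer-support")
kb.ingest("faq-1", "Hold the power button for ten seconds to reset the firmware.")
assistant = Assistant(kb, system_prompt="Answer only from the provided context.")
print(assistant.answer("How do I reset the firmware?"))
```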

And is it browser-based or do you have a
native application available

for Linux, Windows, or Mac?

That is a fantastic question.

So we have both.

The initial experience is obviously
web-based, in what we like to call the AI

Assistant Studio, where,
as an operator, you have this

playground where you can pick the model
that you want to use and the knowledge

base that you want to use.

The knowledge base
is essentially all of the documents that

you've ingested, their metadata, their
graph representations, their vectors.

You can fine tune all of the
knobs associated with those.

And by the way, when I say fine
tune, I don't mean model fine-tuning.

This is retrieval and prompt fine-tuning.

You can take that and
then deploy that on web.

And that's been there for
a number of months now.
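
The specific knobs aren't enumerated on air, so here is a minimal, hypothetical example of the kind of retrieval and prompt settings an operator might adjust in a studio like this. The field names and values are illustrative assumptions, not Vue's actual configuration schema.

```python
# Illustrative only: a hypothetical assistant configuration showing
# retrieval and prompt "knobs" as opposed to model fine-tuning.
assistant_config = {
    "name": "support-assistant",
    "model": "llama-3.1-70b-instruct",     # model picked in the playground
    "knowledge_base": "customer-support",  # ingested docs, metadata, graphs, vectors
    "retrieval": {
        "top_k": 5,                 # how many chunks to pull per question
        "min_score": 0.35,          # similarity cutoff for retrieved chunks
        "use_graph_context": True,  # whether graph relationships augment vector hits
    },
    "system_prompt": (
        "You are a support assistant. Answer only from the retrieved "
        "context; if the context does not contain the answer, say so."
    ),
    "deploy_targets": ["web", "slack", "teams"],  # where the experience is published
}
```

In that framing, fine-tuning means adjusting values like these and the prompt itself; the model weights are never touched.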

We've extended that out
in a few different ways.

One is application-native integration.

So that's the Slack integration, the Teams
integration that I mentioned earlier.

But what I think is really exciting, at
least from my perspective, is we're

building native desktop and mobile
applications that can also take full

advantage of that same infrastructure.

And that's the entirety of the API set
that we've created and the data layout

capabilities that we have and so on.

You can actually run that natively on your
Linux, Mac, or Windows operating system,

or iPhone or Android for that matter.

If you're on the go and you need to have a
conversation with a particular knowledge

base on your iPhone or your
Android, you can do that with Vue.

If you're on your laptop and you want to
have a conversation with a knowledge base

that is centrally stored in the Vue
deployment in the data center running on

System76 or other
systems, you can do that.

If you want to create a local knowledge
base that never feeds back into the

centralized knowledge base, you can
do that with Vue desktop applications.

So the net of it is we have
web, desktop, and mobile.

Cool.

And what hardware requirements
does the desktop one have?

Yeah.
So the desktop does not require a lot.

There's really two different flavors and
use cases, I would say,

for the desktop app.

One is where you want to have local
knowledge bases and a completely

disconnected experience, and you could
have a very lightweight system for that.

The only challenge you're going to run
into is if you're actually

chatting with that data, you're going to
want to have a reasonably powerful

CPU or a GPU that can be used there.

And the other is if you want the full
complement of Vue capabilities, where

you're doing not only local inference, but
also, and this is very much

an enterprise use case.

And enterprises love this approach where
the desktop hardware effectively becomes a

complement to the data center deployment.

And in this type of mode, you obviously
have the ability from the desktop client

to chat with centralized data and
create local knowledge bases and so on.

But the flip side of that is the data
center deployment becomes aware of that

laptop and can use it sparingly as a host to
do inference and embeddings generation.

So everybody's heard of this
concept of the AI PC, right?

I think in three years,
everybody's going to have one.

And if you boil it down to what it
really is, it's a GPU on your desktop.

It's a GPU on your laptop
baked right into the silicon.

And there's really only one killer app for
that today, and that's Copilot,

which is a fantastic app.

Don't get me wrong, I'm not here to say
anything bad about Copilot. But at the end

of the day, by and large, those CPU
and GPU resources are sitting idle.

And by way of Vue, you've got this very
nice data center deployment for AI data

ingestion, processing, AI-powered
experiences, and so on.

The desktop app can connect back to that
to allow you to have those conversational

experiences natively inside of a desktop.

But on the flip side of that, the data
center infrastructure can then distribute

workloads across all of those systems to
help the data center infrastructure scale.

So on the enterprise side, you also have
the ability to scale out the data center

workload with all of the laptops that are
connected back to it or desktops

for that matter.
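
The episode doesn't describe how that distribution works under the hood, so the loop below is only a speculative sketch of the general idea: a central deployment tracking connected desktops and handing embedding jobs to whichever ones are idle. It does not reflect Vue's actual mechanism.

```python
# Speculative sketch of a data center deployment farming out embedding work
# to connected desktop clients. This is not Vue's implementation; it only
# illustrates the scale-out idea described above.

from collections import deque
from dataclasses import dataclass


@dataclass
class DesktopClient:
    host: str
    busy: bool = False

    def run_embedding_job(self, texts):
        # Stand-in for local GPU/CPU inference on the desktop.
        return [[float(len(t))] for t in texts]


class Scheduler:
    """Central deployment that uses idle desktops sparingly as workers."""

    def __init__(self):
        self.clients = []    # registered desktop/laptop clients
        self.jobs = deque()  # queued batches of text to embed

    def register(self, client):
        self.clients.append(client)

    def submit(self, texts):
        self.jobs.append(texts)

    def dispatch(self):
        # Hand queued jobs to idle clients; a real system would fall back
        # to data center hardware when no desktop is available.
        for client in self.clients:
            if not self.jobs:
                break
            if not client.busy:
                batch = self.jobs.popleft()
                client.busy = True
                try:
                    vectors = client.run_embedding_job(batch)
                    print(f"{client.host} embedded {len(vectors)} chunks")
                finally:
                    client.busy = False


# Usage: register one laptop and push a small batch of chunks through it.
scheduler = Scheduler()
scheduler.register(DesktopClient("laptop-01"))
scheduler.submit(["chunk one", "chunk two"])
scheduler.dispatch()
```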

What role has Linux or open source played
in your guys' creation of the product or

consideration for how to
publish it and everything?

It's been absolutely staggering.

We are an open source first company.

The vast majority of the team
are open source developers.

We absolutely prefer permissive
licenses because we've all been there.

We've all had to use open source in
products that we've built in the past.

When we see things like copyleft
provisions, we have a very...

I'm not going to get graphic, but we have
a very visceral reaction to copyleft.

It's just not a good thing.

For me personally, I've been an open
source developer for over a decade, and

the vast majority of what I publish to the
community are the sets of tools and

capabilities that I need to be
effective as an open source developer.

Naturally, there are some things that you
just can't open source because companies

want to monetize them,
and that makes perfect sense.

There are aspects of Vue that we have not
open sourced, but we

do have plans to do so.

But the core underlying layers upon which
Vue is built, either A, come from

permissively licensed open source
software, or B, are things that we have

built, that we have released to the open
source community in a

permissively licensed way.

I think that
it's a double-edged sword, but for me, one

of the edges is a lot
sharper than the other.

I mean, on one hand, you've got the
concern of, Well, we're building out this

intellectual property that somebody could
just take and go

do better than us or package it in a way
that gets better commercial

adoption than we could.

Yeah, that's always possible, but it takes
quite an investment for

somebody to do that.

On the other hand, when you release a
capability to open source, you get a free

audience to test and identify gaps
and identify areas that are problematic

or buggy that need to be shored up.

You naturally start to foster a community
that can help the software mature and

become more robust in a way that you might
not have even been able to have predicted.

There are, number one, a number of open
source libraries that we are

using natively within Vue.

But number two, a lot of capabilities that
we have built for Vue that

we have also open sourced.

And that's something that a number of
us are very passionate about doing.

Awesome.
Well, I know that we have Vue.io

running on our Astra here for some
internal training for our support team.

What hardware are you guys using
from System76 that you can share?

I'm happy to go on record and claim that
the entirety of our team is

System76 fanboys and fangirls.

Awesome.

Yeah, there's zero doubt about that.

We've got a number of Thelio Astras.

We are an Ampere partner.

In fact, we got introduced to System76 by
way of Ampere and the

AI Platform Alliance.

We've got a number of Thelio Astras.

In fact, I'm looking at one with its
beautiful Ferrari red racing stripe here.

I want to say it's something like
128 cores and a terabyte of memory.

It's just a ridiculously powerful system.

I do all of my development testing on
it, and a handful of others inside

the company are doing the exact same.

In fact, I think we have a number of
systems connected by way of Tailscale to

host customer proof of
concept deployments.

So these machines are
ridiculously powerful.

And beyond that, I'm doing all of my
personal work and work for the

company, of course, when I'm not...

The workloads that I don't run on the
Thelio Astra when I'm actually doing

development, I'm running on a Pro, and
it's a very powerful machine as well.

I think the hardware is impeccable.

There's so much thought
that goes into the design.

There's nothing like it on the market.

I cannot go to the public marketplace
today and find a machine this powerful

that will allow me to do the
things that I need to do.

We are ecstatic about having a partnership
and a relationship with System76.

Awesome.

Is there anything else
you'd like to add about Vue.io

that our audience
would like to hear about?

Yeah, absolutely.

If maybe I were able to speak directly to
somebody who is in technology and

development at a small enterprise.

I think that's probably
where our sweet spot is.

AI is really an interesting
category for a number of reasons.

In my past life, my most recent
stint, I was running research at a very

large technology company, Dell,
actually one of the largest.

Some of the areas where I was leading
research included AI and

distributed systems and as-a-service
architectures and a number of others.

The feedback that we kept getting was AI
is an amazing space, but to be able

to do it right is very difficult.

Number one, you have to have
your data house in order.

If your data house is not in order, you're
effectively going to be feeding this AI

machine garbage, and everybody knows
the old adage, Garbage in, garbage out.

Same thing with system prompts.

It's not just data: garbage in,
garbage out. It's also system

prompts: garbage in, garbage out.

There's a lot of nuance to the space.

The number one indicator for success
is having your data house in order.

The biggest problem a lot of enterprises
face is what it takes to embark on a venture to test out

whether AI is going to be
viable for your company or not.

It's going to take some number of people
that are either adequately trained and

understand the space or they have
PhDs and they're math wizards.

It's going to take some number of months.

It's going to take a very large dollar
figure and a good amount

of infrastructure.

The vast majority of
those experiments fail.

We like to call them high
school science fair projects.

In reality, in most cases, they're more
like middle school science fair projects.

The vast majority of them fail,
and it all comes down to the data.

We built Vue specifically to make sure
that the data was in a form and

representation ready for AI consumption
with very simple integration.

The beautiful thing about our platform
is it takes 10 to 15 minutes to deploy.

You can load your data in a matter of 2 to
3 minutes, and you can be chatting with

it inside of an hour, maybe even less.

The beauty of that approach is you can
find out very quickly what your ROI for AI

is going to be without spending five
engineers' time over a three-to-six-month

period with half a million
to a million dollars.

You can test it very quickly.

The vast majority of the customers that
have tested it have shown that, yes, I can

get measurable ROI out of artificial
intelligence for my customer support use

case, my technical support use case, my
marketing analytics and

spend analytics use cases.

There's use cases for every vertical.

I would just encourage
you to give it a try.

We give away credits
for people that sign up.

You get 50 bucks right off the bat that
allows you to ingest hundreds, if not

thousands, of PDF pages, for example.

You could see for yourself how
your data stacks up with a model that

you choose, and whether that can be enough to give
you the AI-powered experience that you

need for customer support, technical
support, HR analysts,

marketing spend analysis, etc.
Awesome.

Well, thanks so much for sharing all the
information about Vue and your

relationship with System76
and open source.

It was very good to have
you guys on the show today.

Yeah, Emma, thank you so much.

We were just ecstatic to be here.

I can't reiterate enough, we are
absolutely System76

fanboys and fangirls.

That's never going to change.

We've just been absolutely
wowed by System76.

So thank you.

I think if I had anything else to add, it
would be that as we are using Thelio Astras and

getting the performance benefit from the
design that System76 put together with

Ampere, the power, the cooling, the
ability to integrate GPUs into the

platform, we really appreciate the
overall architecture and design.

We're really looking forward to working
with System76 customers to

enhance their AI journey and to get that
return on investment they're looking for.

So we encourage everybody to go to

Vue.io and get started for free.
Awesome.

All right, Emma, that was a
wonderful interview today.

Thank you all for tuning in to the System76
Transmission Log, and we will

be back to log with you next week.

We'll see you in and out.

3, 2, 1.

This has been the
System76 Transmission Log.

For more inspiration, check out the
website and follow us on social media.

On your descent back to Earth, please keep
your hands and feet inside the

transport beam at all times.

Captain, sign off and end transmission.
