Linux from Scratch (linuxfromscratch.org)

322 points by udev4096 16 hours ago

114 comments:

by cloudripper 8 hours ago

I gave LFS a go earlier this year. I learned a lot through the process - but I definitely went outside the guardrails. I use NixOS as my daily driver and found myself curious whether I could complete LFS using a "Nix" approach. I was only a basic Nix user at the time, and that choice made a difficult process much more difficult. However, the declarative nature of Nix meant that I had clear notes on every step of my process - and if something didn't work, I could backtrack and troubleshoot to find the root cause. The end result is here [0].

My understanding of Linux, of bootstrapping, cross-compilation, and Nix has grown tremendously as a result of the time I put into this project - and I still go back and reference the work from time to time. When I get some time to revisit the Nix-based LFS project, there are quite a few things I would like to clean up, including setting kernel configs and handling post-build permissions.

Nix-complexities aside, I highly recommend LFS if you like to understand how things work and don't mind a little suffering along the way.

[0]: https://github.com/cloudripper/NixLFS

by hi-v-rocknroll an hour ago

While Nix/NixOS solved RPM dependency hell and the problem of installing multiple concurrent versions side by side, it created its own problems.

The problem with Nix/NixOS is that everything is too isolated to be configurable or usable, and so almost everything is broken, because it doesn't work like any other OS. Chasing down all of the oddities and gotchas becomes a full-time IT job that slows everything else down, and then, with a fragile special snowflake of patches and workarounds, it becomes very expensive and slow to do anything or to count on it being reliable. Furthermore, the packaging DSL is also too-clever, special-snowflake syntax that could have made do with declarative data, or with imperative procedural code in something widespread like Python, Bourne shell, or JS, to reduce the friction of customization and contribution.

Like Qubes, an over-optimization for particular goal(s) ends up becoming Achilles' footguns because of a lack of compatibility and usability with everything else that came before.

by therein 19 minutes ago

If the DSL were easier to work with and more familiar, and if there were some good IDE support with autocomplete and documentation, I think it would be amazing.

by ocean_moist 2 hours ago

> This results in a significant storage footprint, often exceeding 12GB.

12GB being considered significant makes me feel good, a return to simplicity. The other day I couldn't provision a VM with less than 512GB of storage...

I can't even play my favorite games without giving up >150GBs...

by tomberek 3 hours ago

This is excellent. Have you considered making a presentation or a write-up of the experience?

by cloudripper 3 hours ago

Thanks for the input. A couple of folks suggested that recently as well. Once I can clear up some bandwidth, I do intend to follow through on that. It gave me a huge appreciation for Nix as a build system, and I would love to share that if it were helpful to others.

by ayakang31415 3 hours ago

The first thing that pops out on a Google search for NixOS is "Declarative builds and deployments." How exactly is NixOS different from other distros, such as Ubuntu?

by hollerith 3 hours ago

NixOS separates packages. If the package foo contains a file /usr/bin/foo, NixOS installs it in /nix/store/67c25d7ad7b2b64c67c25d7ad7b2b64c-foo/usr/bin/foo. In order to make this separation work, Nix must sometimes rewrite binaries so that all references in the binary to /usr/bin/foo become references to /nix/store/67c25d7ad7b2b64c67c25d7ad7b2b64c-foo/usr/bin/foo.

The advantage of this approach is that it gives more control to the distro maintainers and the admins of the computer, taking that control away from the "upstream" maintainers of the software being packaged. For example the software being packaged cannot just call the library bar because bar is not at /usr/lib/bar.so like it is in most Linux distros -- it is at /nix/store/17813e8b97b84e0317813e8b97b84e03-bar/usr/lib/bar.so, but of course the software does not know that unless the person creating the Nix package (the packager) arranges for the software to know it (again sometimes by doing a search-and-replace on binaries).

If the upstream maintainer of foo thinks foo should link to version 6 of library bar, but you think it should link to version 5, NixOS makes it easier for you to arrange for foo to link to version 5 than most distros do (even if version 6 of bar is needed by other packages you have installed which you need to use at the same time as you're using foo).

Note that if this separation of packages imposed by NixOS has any beneficial security properties, it is merely security through obscurity, because there is nothing preventing a binary from covertly searching through the directory listing of /nix/store/ for the name of the library it wants to call. Nevertheless it turns out to be useful to seize some control away from upstream in this way, even if technically upstream could seize the control back if it were willing to complicate the software to do so.

People, including the creator of Nix and NixOS (Dolstra), will tell you that NixOS's main advantage is "declarativeness" (which in the past Dolstra called "purity") or the fact that the compilation / building process is deterministic. I believe both positions (NixOS's advantage is declarativeness and the advantage is deterministic builds) are wrong. Specifically, I believe that although deterministic builds are useful, the separation of packages I described is much more useful to most users and prospective users of NixOS.

Another way to summarize it is that NixOS package maintainers routinely modify the software they are packaging to use less "ambient authority".
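
To make this concrete, here's what the separation looks like in practice (store paths reuse the made-up hashes above; the rewriting is commonly done with patchelf):

    # Dependencies resolve to exact store paths, not a global /usr/lib:
    $ ldd $(which foo)
        libbar.so => /nix/store/17813e8b97b84e0317813e8b97b84e03-bar/usr/lib/bar.so
    # The binary rewriting described above typically uses patchelf, which
    # edits the library search path baked into an ELF file:
    $ patchelf --print-rpath $(which foo)
    /nix/store/17813e8b97b84e0317813e8b97b84e03-bar/usr/lib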

by _ank_it 2 hours ago

Extreme noob here: I was trying NixOS and got really confused about how to install Python packages, as pip was not allowed at the system level.

by imiric 42 minutes ago

Nix purists would say that you should use flakes to declare all the dependencies for each project, and reference all Python dependencies as Nix packages there. Nix effectively tries to replace every package manager in existence, so all Python, Ruby, Emacs, etc. dependency trees are duplicated in Nix.

I think this is insane. Not only will many packages be missing from Nix, you will also have to wait for the upstream changes to actually propagate to Nix repositories. This all assumes, of course, that there are no packaging issues or incompatibilities in this repackaging.

This is one of the ways that Nix sometimes just gets in your way. I've been using Nix(OS) for several years now, and this still bothers me.

Instead of doing this, for Python specifically I would suggest installing pyenv, which Nix packages. Then enter a nix-shell with a derivation thingie[1,2], and install Python as usual with pyenv. Then you can use any Python version, and with pyenv-virtualenv (which Nix _doesn't_ package...), you can use venvs as you're used to. Sure, you don't get the benefits of the declarative approach and isolation as with "the Nix way", and you may run into other issues, but at least it's a workflow well known to Python devs. Hope it helps!

[1]: https://gist.github.com/imiric/3422258c4df9dacb4128ff94d31e9...

[2]: It took me way longer than I would like to admit to figure this out... This shouldn't be so difficult!
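
To make the suggested workflow concrete, a sketch (nixpkgs does package pyenv; the derivation in [1] also supplies the headers and libraries pyenv needs to compile Python):

    nix-shell -p pyenv              # ad-hoc shell with pyenv available
    pyenv install 3.12.4            # build a Python outside the Nix store
    pyenv local 3.12.4
    python -m venv .venv && . .venv/bin/activate
    pip install requests            # plain pip, the familiar way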

by therein 17 minutes ago

Ironically, that's when you create an Arch container in LXC.

by imiric 12 minutes ago

Meh, maybe. Containers make everything more difficult for local development, though. That would be my last resort in this case.

by j0057 an hour ago

If other distros allow pip-installing into the system, that could be considered a bug or at least an anti-feature, because it's almost always a bad idea: it can clash with distro-managed Python packages, it will break on Python upgrades, and sooner or later you will run into version conflicts (a.k.a. DLL Hell). Recent versions of pip refuse to install into the system by default, for all of these reasons.

It's better to instead pip-install Python packages into virtual environments; recent Pythons have `venv` built in for this purpose. For user-scoped or system-scoped utilities, `pipx` can manage dedicated virtual environments and symlink them into the search path.
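
For example:

    # Project-scoped: a venv keeps pip away from the system Python.
    python -m venv .venv
    . .venv/bin/activate
    pip install requests

    # User-scoped CLI tools: pipx gives each tool its own venv and puts
    # its entry points on your PATH.
    pipx install httpie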

by hebocon 2 hours ago

I'm new as well - it's good to remember that NixOS, nix-shell, and the programming language of Nix are all separate. You can start with your current distro to learn nix-shell first.

I still have no idea how it all works but it seemed prudent for me to at least try.

by dvno42 6 hours ago

Between LFS and Stage 1 and 2 Gentoo installs back in the early 2000s during high school, this gave me a big leg up in my journey of learning about computers and Linux. I can't thank those project maintainers enough for helping me get my footing at such a young age and piquing my interest in computing. I ended up printing out the LFS book in segments on one of the printers in high school and brought it home in pieces to use on the home computer.

by _joel an hour ago

Same, ricing Gentoo taught me a lot about linux and mainly how to fix it. 20+ year career later... I'm still learning how to fix it :)

by tehf0x 5 hours ago

Amen to that! 20 years later this was my gateway drug into being addicted to computers and gave me my full stack understanding I still use every day at work. <3 Gentoo and the friendly geek at the coffee place I worked at when I was 14 who gave me my first hit and held my hand through what is effectively mental torture for most people

by hi-v-rocknroll an hour ago

Yep. Recompile all the things with different USE flags.

by _joel an hour ago

Recompiling the whole of OpenOffice so it loads 1 second quicker... just using a hacked-up distcc cluster and a day's worth of CPU time. Bargain!

by oorza 5 hours ago

> I ended up printing out the LFS book in segments on one of the printers in High School and brought it home in pieces to use on the home computer.

I am actually flabbergasted there's another human being on Earth that has this exact same story as me. I used one of those giant 3 ring legal binders for my printed copy, lmao.

by sieste 12 hours ago

I really like the idea, really tried following the process several times. But each time it just turned into a copy-incomprehensible-commands-into-the-terminal exercise at some point, and I lost motivation. Did anyone experience the same?

by aflukasz 8 hours ago

It feels to me like you approached this with the wrong mindset, like you focused too much on finishing the whole process.

With LFS you must put a substantial amount of value into the journey itself. It's not about copy-pasting commands, it's about trying to actually understand what and, more importantly, why you are doing each step. My recollection is that the LFS docs were good in helping with that. Or maybe it was coming from reading the docs of the actual components as I went along? I don't remember, probably some mix of both.

I did set an LFS system up once, I think it was 2001 or 2002. I specifically remember that it took many days (due to compilation times), was very interesting, and at the end I was putting final touches on my mutt and slrn configs. With a vague memory that I had some TODO for mutt that I didn't finish, still lingering in my mind, apparently! :)

All in all, great and satisfying learning experience.

I would not use such a system as a daily driver, though. I think it's good only if you have time and want to learn. I'm curious if someone here is brave enough to have a different approach.

by oorza 5 hours ago

LFS to a computer scientist should be like making soap to a chemist: something you can do as a fun, educational experiment, but not how you source a necessary tool for your life.

by vbezhenar 10 hours ago

I built my LFS around 20 years ago. While I followed the instructions most of the time, I built some components myself. I remember that I wrote my own init scripts, and I think I wrote my own initramfs (not completely sure about it, but I definitely remember tinkering with one, and it probably was LFS).

I also wanted to implement an isolated build system and my own package manager (basically a tar wrapper with a file database, to be able to uninstall - no fancy dependency stuff), but containers had not been invented back then and my skills were not good enough to properly utilize fakeroot, so it was never implemented, although I spent some time experimenting. Today I would do it with docker - exciting times.

by johnisgood 8 hours ago

Wanting to make (and making) my own package manager because of LFS and minimalism in general - good times.

by noufalibrahim 10 hours ago

I haven't tried this, but I have used https://github.com/MichielDerhaeg/build-linux during my trainings, and all my students quite enjoyed the experience. It basically builds a kernel, libc, busybox, init, etc. and gets the whole thing running inside qemu.

I found it quite educational and worth the little time I spent on it.

by pushupentry1219 11 hours ago

When I was in high school I had a bunch of free time outside of school days. I used a YouTube guide to install Arch. I failed many, many times. But in doing so I almost kind of learnt what each command was doing.

I had the same experience when I installed Gentoo. In both cases at first I was copy/pasting commands, but through failure I ended up understanding a lot of what the commands did.

And what someone else said below is true as well, actually try and force yourself to understand what you're typing. Even briefly just scanning a man page helps a lot.

by bee_rider 9 hours ago

I wonder if the wiki would have been an easier start, than YouTube. YouTube is nice for learning things with a visual component, but installing Linux is all text, all the time. The ability to easily hop around and re-read sections on the wiki where necessary (I mean, jumping around is possible on YouTube of course, but it is really easy on the wiki) seem like it would be a big help.

by pino82 6 hours ago

Reading and actually understanding non-trivial text is hard if you are part of a generation that was never challenged to actually learn it. For those people, YouTube (and a few similar shops) are the default way to consume any content. That's what they do all the time. Sure, they somehow know those legacy emojis that you call the latin alphabet. It will just not lead to a deep understanding of text.

Another aspect is probably that watching YT clips always has a feeling of being part of something. Some movement, some bubble, some society, whatever. They don't install Arch because they want to learn something, or to make some use of the OS. They do it _because_ they found it on YT and they want to be part of it. Maybe they even write comments or make a 'reaction' video. Today it's Arch, tomorrow it's a special pizza recipe from that other guy on Insta. It doesn't really matter.

by bee_rider 5 hours ago

I dunno. I’m a millennial so all sorts of stuff was just ascribed to my generation. As a result, I tend to just assume these differences are overstated.

I worked with college students fairly recently. They did often reach reflexively for video. But when the written material was good enough, they used it.

by chgs 5 hours ago

Us millennials are old. My 43rd birthday is coming up fast.

College students are the next generation. Most millennials remember dialup, or at least a time before mainstream streaming video. College students today have spent their formative years constantly online, with instant access to more material, in any format they want, than they could ever grasp.

by bee_rider 4 hours ago

Yeah, I was just abstracting from the experience of having everything I did attributed to my generation, and then applying that experience to the next generation.

They handle some things a little differently and sometimes reach for different defaults, but in the end, it isn’t like they are coming from some totally alien planet or anything like that.

by pino82 2 hours ago

Very often when I discuss it, someone argues along the lines of "yes, it looks different, and maybe even weird at first glance, but everything is indeed fine, and look how access to all that content even makes them more competent than older generations in many ways".

And I'm always asking myself what is wrong with me, that none of that reflects my practical experience at all.

by bee_rider 14 minutes ago

I dunno, hard to say, if our experiences don’t match maybe we’re just in different environments. No particular reason to assume mine is the ground truth of course.

One possible skew could be: often it is professors who complain about this stuff, but the type of person who goes on to be a professor tends to be pretty clever and surround themselves with clever friends. So if you are a professor or a highly skilled programmer or something, keep in mind that you probably have a rosy picture of the average competence of the past.

by pino82 2 hours ago

We are probably more or less the same age. And yes, they said it about us as well. And they were right. It's a slow downward trend. And it's still continuing. Do you also see these young mothers on the street who would just ram you with their baby buggy because they are deeeeply involved in some TikTok swiping ceremony? Addicts. What do we expect from that?

> But when the written material was good enough, they used it.

To some extent, yes, as we all do, they will try to make a good show for you when they feel that there is an audience for that show and it might somehow pay off.

by kobalsky 8 hours ago

I grabbed a random page from the manual:

https://www.linuxfromscratch.org/lfs/view/stable-systemd/cha...

Every step is explained and every parameter used is documented.

by ruszki 3 hours ago

I would say "explained". As a layman, who worked with linux previously (and even tried LFS a long time ago), but definitely don't have a deep understanding of linux, and eve my knowledge is not up-to-date:

The sed command is not explained at all. Even the description is nonsense if you don't already know what they are talking about. What is this "default directory"? Why do you need to set it, why there and only there, and why does that command work? Even the command itself is something I need to check the manual for to see what the heck it does, because it's not a simple one.

> --enable-default-pie and --enable-default-ssp

The description is almost unusable. So we don't need it, but it's "cleaner", which in this context means exactly nothing. So what happens if I leave it out? Nothing? Then why should I care?

> --disable-multilib

Okay, it doesn't support "something". I have no idea what multilib is, or why I should care. Basically the next arguments' descriptions tell me it's because the compilation wouldn't work otherwise. And then..

> --disable-threads, --disable-libatomic, --disable-libgomp, --disable-libquadmath, --disable-libssp, --disable-libvtv, --disable-libstdcxx

But why would they fail? I want to understand what's happening here, and I need to blindly trust the manual because they just tell me that "they won't work, believe us".

> --enable-languages=c,c++

Why are these the only languages which we need? What are the other languages?

So in the end, the descriptions don't really help you understand what's happening if you don't know it already. The last time I started LFS (about 10 years ago), that was my main problem: you already need to know almost everything to understand what's really happening and why, or spend time reading manuals, or try to find basically unsearchable information (like why compiling libatomic would fail at this step). So after a while I started the blind copy-pasting, because I didn't have the patience for literally months of this, and when I realized that was pointless, I gave up.

by gabriel 32 minutes ago

Go deeper. You're right on the cusp of it since all of your questions are fantastic. Even digging into one of your questions will bring up highly relevant material for understanding how a Linux system functions.

    > The sed command is not explained at all. Even the description is nonsense if you don't already know what they are talking about. What is this "default directory"? Why do you need to set it, why there and only there, and why does that command work? Even the command itself is something I need to check the manual for to see what the heck it does, because it's not a simple one.
By "default directory" they just mean that the upstream GCC source code has a file with default variables and they are modifying those variables with the sed command to use $prefix/lib and $prefix/usr/lib instead of $prefix/lib64 and $prefix/usr/lib64, e.g. lines that contain "m64=" and replacing "lib64" to be "lib". This is what sed is used for: To make string substitutions based on pattern matching. Think through ways of writing the sed command and testing on your own file to see how it behaves. This will lead you to more tools like diff, grep and awk.

    > > --enable-default-pie and --enable-default-ssp
    > The description is almost unusable. So we don't need it, but it's "cleaner", which in this context means exactly nothing. So what happens if I leave it out? Nothing? Then why should I care?
Go back and re-read the sections up to this point. Make note that you're in Chapter 5, which is the bootstrapping phase for the toolchain cross-compilers. Then look into the features that are mentioned. You can usually see descriptions by running ./configure --help, or by looking up the GCC documentation. Those features are for security purposes, and if you put them in the perspective of the bootstrap phase, they aren't needed if the only purpose of the temporary GCC binaries is to compile the final GCC in a later phase. To your point, perform an experiment and enable those features to see if there really is a difference other than time spent compiling. GCC incrementally adds security features like this, and they are a big thing in hardened distributions.
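
For instance, you can interrogate the switches yourself rather than trusting anyone's summary:

    # From the unpacked GCC source tree: what do these flags claim to do?
    ./configure --help | grep -E 'default-pie|default-ssp'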

    > > --disable-multilib
    > Okay, it doesn't support "something". I have no idea what multilib is, or why I should care. Basically the next arguments' descriptions tell me it's because the compilation wouldn't work otherwise. And then..
A great feature to look up! Check out https://gcc.gnu.org/install/configure.html and search for the option; you'll find that it has to do with supporting a variety of target calling conventions. I can see how that'd be pretty confusing. It has to do with the underlying hardware support for application binary interfaces that GCC can utilize, and it turns out you probably only need to support your native hardware (e.g. x86_64). That is, you're compiling your system from scratch and it'll only run on your native hardware (x86_64), but if you were a C/C++ programmer, maybe you'd want support for other hardware (64-bit ARM systems are pretty common today, as an example). So you can save time/space by disabling the defaults, and honestly the defaults are just not all that relevant on most systems today.

    > > --disable-threads, --disable-libatomic, --disable-libgomp, --disable-libquadmath, --disable-libssp, --disable-libvtv, --disable-libstdcxx
    > But why would they fail? I want to understand what's happening here, and I need to blindly trust the manual because they just tell me that "they won't work, believe us".
Try it and find out. I would expect them to fail due to reliance on other dependencies that may not have been installed or included in this bootstrapped build. Or maybe because those components don't behave well with the LFS-based bootstrap methodology and ultimately aren't needed to bootstrap. Sure, trust the LFSers, but also think through a way to test your own assertions about the build process and try it out!

    > > --enable-languages=c,c++
    > Why are these the only languages which we need? What are the other languages?
GCC supports many language front-ends. See https://gcc.gnu.org/frontends.html. Only C/C++ is needed because you're bootstrapping only C- and C++-based package sources. You can validate this as you build the remaining sections. It's conceivable that if you needed other languages in the bootstrap, you could include them.

    > So in the end, the descriptions don't really help you understand what's happening if you don't know it already. The last time I started LFS (about 10 years ago), that was my main problem: you already need to know almost everything to understand what's really happening and why, or spend time reading manuals, or try to find basically unsearchable information (like why compiling libatomic would fail at this step). So after a while I started the blind copy-pasting, because I didn't have the patience for literally months of this, and when I realized that was pointless, I gave up.
It's a steep learning curve, especially for a bootstrap, which by its nature is circular! To be honest, that's sort of the utility of LFS: it can take you up to a certain point, but there are so many options and pitfalls in building a Linux system (or really any complex piece of software), and the real usefulness is pushing through, picking something, and learning about it. Then using what you learned to apply to the unknown. GCC is one of the most important packages too, so there's a lot to unpack in understanding anything about it, but the impact is large.

by andrewmcwatters 4 hours ago

You grabbed a random page.

by mnahkies 7 hours ago

Last time I tried (probably well over 10 years ago) I struggled more with how long stuff took to compile. The main learning I remember from it was the chroot concept, which later helped me grasp containerization more easily.
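
For reference, the chroot step looks roughly like this in the book (environment trimmed here; the LFS chapter sets a few more variables):

    # Enter the half-built system with a scrubbed environment so host
    # settings can't leak into the remaining builds:
    chroot "$LFS" /usr/bin/env -i \
        HOME=/root PATH=/usr/bin:/usr/sbin \
        /bin/bash --login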

by sweeter 14 minutes ago

Yes. I got a little bit out of it, certainly, but it really is mostly an exercise in compiling packages. I was a little disappointed, but in the end LFS inspired me to take on a rewrite of the Unix coreutils in Golang [1]. I learned far more about Unix systems from doing this than from anything else. It also improved my skills a lot; I look back on some of this code and cringe a little bit.

[1] https://github.com/sweetbbak/go-moreutils

by pipes 11 hours ago

Yes, this, and endlessly unzipping things. I gave up after many, many hours. I doubt I learned much from it.

by zvmaz 9 hours ago

> Did anyone experience the same?

I tried several times, and tried again just recently. I share your sentiment. It perhaps gave me a renewed appreciation of the huge benefits Linux distributions and package managers give us.

by kwanbix 11 hours ago

Same for me. I tried it about 20 years ago. I didn't feel like I was learning anything, just copy-pasting like you say. I haven't retried since, but I think it would be much better if they explained what you are doing and why.

by hi-v-rocknroll an hour ago

Nope. It's a launchpad, not meant to be followed blindly. The point is trial-and-error experimentation along the way: can I switch the order of these packages, or enable dependency X?

It also helps to have a fast x86_64 box with zillions of cores and plenty of RAM and SSD, and also a compiler cache like sccache to speed up rebuilds.
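
For the configure-and-make packages, wiring in a cache can be as simple as this sketch (some packages need more care):

    # Wrap the compilers so repeat builds hit the cache instead of cc:
    export CC="sccache gcc" CXX="sccache g++"
    ./configure --prefix=/usr
    make -j"$(nproc)"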

by keyle 12 hours ago

Yep, basically. At some point you realise you could do it, but do you really want to go through it all... It's a slog.

It is a fun experience though!

by stavros 10 hours ago

> It's a slog. It is a fun experience though!

Aren't those two opposites?

by Brian_K_White 6 hours ago

You've never experienced effort as fun? That is... not flattering.

by stavros 5 hours ago

Effort, yes. Slog, no.

by wezdog1 8 hours ago

Type 2 fun

by exe34 11 hours ago

It's better if you try to work out what the commands and options are doing. I did this a long time ago, and 15 years later I realise that it gave me a big advantage over my colleagues who haven't done something like that. "missing xxxxx.so" - they don't even know what that means, whereas I'm already trying to find the .so and putting it on LD_LIBRARY_PATH to see if it helps.
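
The triage usually looks something like this (names hypothetical):

    # "error while loading shared libraries: libfoo.so.1: cannot open..."
    ldd ./myprog | grep 'not found'          # which references are unresolved?
    find / -name 'libfoo.so*' 2>/dev/null    # is the library on the box at all?
    LD_LIBRARY_PATH=/opt/foo/lib ./myprog    # test a candidate path before
                                             # making anything permanent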

by spockz 10 hours ago

In my experience, whenever a .so was missing it was either due to a missing package at the OS package manager level, or a completely broken gcc or llvm setup. I never needed to explicitly add single .so files to a path. (Besides the use case where I specifically wanted to override a malloc implementation.)

In which cases did/do you need to add individual files?

by nmoura 7 hours ago

In my case, when I was a Slackware user before SlackBuilds was created, sometimes I wanted to try out programs for which there was no package. Usually they required a .so file that was not installed, and a workaround was to pull it manually from a package of another distribution compiled for the same architecture. When the .so file was there but on another path, a symbolic link was sufficient. The ldd command was a good friend.

Of course that was not the best and cleanest solution - having things outside the package management system bothered me - but it was enough to experiment with programs, and I kept track of the changes so I could restore the prior state of things. Later on, the SlackBuilds project eased the work, and I contributed by writing code to automate the creation of a few packages. I learned a lot from these issues.

by screcth 8 hours ago

In my case, I work with proprietary EDA tools. Vendors love messing with LD_LIBRARY_PATH. Chaos ensues when they override a system library, or when two tools require mutually incompatible versions.

I agree with the comment you are replying to. Having broken my home Linux installs too many times has taught me how to diagnose and fix this sort of issue.

by exe34 10 hours ago

In this case it's an in-house monstrosity that's got a lot of various languages and build systems involved, and the moment you update a compiler or build system, everything jumps to a new location for no good reason. It was just an example though - same issue with a .h not found during compilation, or using a different compiler from the system-wide one.

by notorandit 8 hours ago

Nope. I never execute commands without understanding their meaning. Maybe you expected to complete an LFS installation in very little time, which cannot be the case in general, and in this very case in particular.

by rwalle 10 hours ago

Haven't tried it, but I guess if you were doing this again today, ChatGPT would help a lot.

by cjk 18 minutes ago

This, and its no-longer-maintained cousin, Cross Linux from Scratch[1], were instrumental in learning how to bootstrap a custom Linux distro for embedded devices (especially the CLFS Embedded variant[2]) at a previous job.

[1]: https://clfs.org/

[2]: https://clfs.org/view/clfs-embedded/arm/

by jakogut 41 minutes ago

Another project to check out if you enjoy LFS is buildroot.

Buildroot uses Kconfig (and associated tools like menuconfig, config, etc.) same as the kernel, to generate a configuration for building an embedded Linux system. This configuration can be committed and patched in the repo for your project. Adding an application or library automatically pulls in dependencies. Configs can be merged with fragments that add specific features.

Buildroot is capable of building its own cross compilation tool chain for many architectures, and enables developers to choose their own libc, and other toolchain configurations. It can also be configured to pull a prebuilt toolchain tarball to save time.

Packages are written in GNU Make, and macros are available for pulling code from GitHub and building packages using autoconf, cmake, meson, golang, Rust, python, and many more.

It works fantastically for anything from building an embedded Linux distro for something like digital signage, to a self-contained network bootable application for large scale automated provisioning of devices, to building minimal tarballs of filesystems to run as containers.
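
The basic loop is short. A sketch (fragment name hypothetical; paths as in recent Buildroot releases):

    # Inside a Buildroot source tree from buildroot.org:
    make menuconfig                  # pick arch, toolchain, init, packages
    make                             # builds toolchain, kernel, and rootfs
    ls output/images/                # rootfs.tar, kernel image, etc.

    # Merge a feature fragment into the config instead of hand-editing:
    ./support/kconfig/merge_config.sh .config board/myproject/extra.fragment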

by ocean_moist 3 hours ago

I did LFS after I started dailying Gentoo with a custom kernel (~2017); it was actually fairly easy, if time-consuming. People on IRC are extremely helpful. These days I write code more than infra, but I still think it was valuable.

That, combined with my general interest in Linux, probably saved me thousands in cloud costs. Linux is a dying art among young software engineers, but alive among young nerds. Even among infra/devops people - maybe if they learned more about Linux they wouldn't reach for k8s to solve the simplest problems.

by Scuds 19 minutes ago

I wonder what an equivalent "BSD from scratch" is like? Linux was assembled from a collection of parts, while BSD is (reputedly) a lot more 'designed from the ground up'. Even a modern system like Fuchsia - what's that like to build from the ground up?

Or is it "You fool! Building your own kitbashed Gundam of an OS is the point."

by hi-v-rocknroll an hour ago

I wrote an (unreleased) Bash-based framework for Docker container- and exo-Docker VM-based LFS builds.

Dir listing: https://pastebin.com/JrPkftgr

Dockerfile (it's still missing throwing away intermediate layers, because it's optimized for caching and dev speed):

    FROM quay.io/centos/centos:stream9

    COPY files/bashrc /root/.bashrc
    COPY files/bash_profile /root/.bash_profile
    
    COPY --chmod=755 files/sysdiff /usr/bin/sysdiff
    
    WORKDIR /mnt/lfs
    
    COPY files/functions.inc files/
    COPY --chmod=755 files/run files/
    
    COPY scripts/0000-install-deps scripts/
    RUN files/run install 0000-install-deps
    
    # Docker host runs a memcache instance to cache build artifacts in RAM
    COPY scripts/0010-install-sccache scripts/
    COPY files/sccache-wrapper.c files/
    RUN files/run install 0010-install-sccache
    
    COPY scripts/1000-setup scripts/
    COPY files/hosts files/passwd files/group files/shells files/inputrc files/fstab.in files/locale.conf.in files/
    RUN files/run install 1000-setup
    
    ...

by Manfred 12 hours ago

You could argue that running through the woods doesn't teach you about trees, but it's really effective when you bring a book along.

Linux From Scratch is a fun way to explore what parts make up a Linux distribution. I did this a few times long ago before switching to Gentoo, and it really helped with appreciating the freedom to pick and choose the details of your operating system without actually writing it from scratch.

by harha_ 12 hours ago

What's the fun in this? I once looked into it briefly and it's what I guessed: mostly building and installing the required software individually. That's boring; I guess the fun would be in building an actual usable distro "from scratch".

by thom 11 hours ago

The first run isn't much fun. There are useful skills in just knowing how to build stuff, and knowing all the constituent parts of a running system. But overall I suspect you'd learn more in a year of using Arch.

But on subsequent play-throughs, you get to be creative! Want to ignore the FHS and try putting your apps somewhere weird? Want to use a different libc or init system? Build everything around an esoteric shell? Go for maximum optimisation? It's the replayability that makes Linux from Scratch fun, and yes, totally worth working out what your own distro might look like.

by coatmatter 8 hours ago

I think it depends on one's true goals and the way it's approached. Compare with http://www.greenfly.org/mes.html (which affects many Arch users too, I feel)

by akdev1l 9 hours ago

You would start with LFS to build a base and then add your own package manager to make a “true” distro.

If you want to just build an existing distro from scratch, they all have ways of doing that, as they need to rebuild packages regularly. E.g., it's a lot simpler to build a Fedora image from scratch using Koji/OSBuild than via LFS, as Koji/OSBuild basically automate the whole LFS process.

Additionally, in order to have fun with LFS you don't really need to do the whole book. You can make your own little Linux "distro" from scratch with just the kernel and a statically linked busybox (sketched below). Busybox comes with all the minimal utilities, so you can pick and choose what you want to write and what you want to reuse.
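
A sketch of that route, assuming you have already built a kernel image and a static busybox:

    mkdir -p initramfs/bin initramfs/proc initramfs/sys
    cp busybox initramfs/bin/ && ln -s busybox initramfs/bin/sh
    cat > initramfs/init << 'EOF'
    #!/bin/sh
    mount -t proc none /proc
    mount -t sysfs none /sys
    exec /bin/sh
    EOF
    chmod +x initramfs/init
    (cd initramfs && find . | cpio -o -H newc | gzip) > initrd.gz
    qemu-system-x86_64 -kernel bzImage -initrd initrd.gz \
        -append console=ttyS0 -nographic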

by bee_rider 8 hours ago

I wonder if some sort of “BSD from scratch” would be a more fruitful exercise. Since they include more of the overall system, you’d end up with something approximating a regular BSD install pretty well at the end.

by machinestops 2 hours ago

A "BSD from scratch" project would end up closer to something along the lines of "Debian from scratch" or "Redhat from scratch". You would be performing an exercise in bootstrapping, and little else.

by skotobaza 7 hours ago

Learning how things work under the hood is fun, and you should do this if your job revolves around Linux.

by mbivert 11 hours ago

You can think of an LFS as training before actually tackling an original distribution: distributions are complex, and even following the steps of an LFS won't necessarily yield a working system (lots of room for mistakes).

Or, as a base to build a distribution: you can automate some of the process, slap a pkgsrc on top, etc.

For budding programmers, it's a great exercise: discover software, appreciate dependencies, understand components of a Linux distribution, learn how to build things from source, configure a kernel, etc.

by immibis 10 hours ago

Building an actually usable distro means doing this, but every day for every package that updated, and scripting it for other people to use.

by exe34 11 hours ago

> building an actual usable distro "from scratch".

could you say a few words on how this would avoid building and installing the required software individually? are you thinking of osdev instead?

by creatonez 9 hours ago

I assume they were alluding to package managers, automation, and release engineering, which are somewhat covered in the LFS Hints Project (https://www.linuxfromscratch.org/hints/)

by f0e4c2f7 4 hours ago

Building using LFS has been on my list of "I really should probably do that just to learn" for about 20 years now. I'll get around to it! This year I'm finally learning Lisp (and really enjoying it).

by theanonymousone 4 hours ago

Isn't an LLM making such things more approachable?

by f0e4c2f7 4 hours ago

For sure. It's been amazing honestly and I feel like it has really accelerated my learning. Enough that I'm trying to get really serious about making the most out of this new leverage seemingly out of nowhere.

I use it a lot for mapping out the initial concepts, but I find one of the best use cases is, after understanding the basics, explaining where I need to learn more and asking for a book recommendation. The quality of my reading list has gone up 10x this way, and I find myself working through multiple books a week.

Great for code too obviously, though still feels like early days there to me.

by ngneer 4 hours ago

This brings back memories. I built LFS on a Pentium-based machine a few times and it was quite fun. I took a similar approach to building a custom bare-metal system for a client that needed extra security. I sometimes wish there were a well-maintained bare-bones distribution inspired by the LFS philosophy; that might help with managing complexity on the cloud. Scanning the thread, NixOS came up. I used to love Slackware, too. Any other recommendations? Something with minimal dependencies yet capable of being a LAMP-like server?

by greggyb 26 minutes ago

Alpine Linux is a very minimal base built around busybox and musl libc.

Void is another minimalist distro offering a choice of glibc or musl, and using runit as an init.

If you want more choices of init and a bleeding edge rolling release, Artix is a derivative of Arch with multiple init options in the core distribution.

Arch and Gentoo are the classics of do-it-yourself minimalism.

by throwaway2037 2 hours ago

Alpine isn't good enough as a base system?

by _joel an hour ago

musl isn't as fast as glibc (last time I checked), but I guess it's fine if you're ok with that.

by fefe23 6 hours ago

LFS has been a tremendous resource over the years for me. I'm happy it exists and that people put in the time to keep it going.

I often wonder why the existence of long "you also need to do this completely unintuitive thing first" documentation on the open internet isn't shaming more projects into reducing barriers to build their software.

by krylon 3 hours ago

Back in the day, I made an attempt at LFS but gave up halfway through. In my defense, I had done a gentoo stage 1 emerge earlier that year. ;-)

I never intended to end up with a usable system, I was just in it for the learning experience. And I did end up learning a lot.

by lrvick 12 hours ago

For those who want to see a full-source-bootstrapped, deterministic, and container-native LFS, check out https://codeberg.org/stagex

by devilkin 12 hours ago

I did this, once, on a 386. Did I learn things? Definitely. Will I ever do it again? Definitely not ;)

by tonyarkles 8 hours ago

Heh, I did this back in about 2000 or 2001 with the intent of taking an old 486 I had mounted in a relatively flat case to fit under the seat of my car and turning it into an MP3 player. The process was a lot of fun, I learned a ton, and then... I discovered that I'd done the entire build targeting a Pentium CPU, and all of the binaries contained instructions that the 486 couldn't run.

I did not repeat the process :)

by yonatan8070 11 hours ago

I wonder, if you were to script all the commands you ran back in the day, and ran that same script on your old 386 and on a modern system with a top-of-the-line AMD Epyc or Intel Xeon, how much faster would it run?

by llm_trw 10 hours ago

There is a unit of compilation as part of the LFS book which lets you estimate the whole build process: the SBU (standard build unit), defined by timing the first binutils build.

by fourfour3 11 hours ago

Especially with the increase in storage performance - going from a hard disk that might even still have been using PIO modes to modern NVMe would be gigantic.

by llm_trw 10 hours ago

The kernel is rather larger today: https://stopbyte.com/t/how-many-lines-of-code-linux-has/455/...

The same is true for all other pieces of software.

Build time will always increase until no one can be bothered to build it any more.

by fourfour3 11 minutes ago

I think you've missed the point:

> I wonder, if you were to script all the commands you ran back in the day, and ran that same script on your old 386 and on a modern system with a top-of-the-line AMD Epyc or Intel Xeon, how much faster would it run?

Implies you're compiling the 386 era versions of Linux - so the fact modern Linux is larger is immaterial.

by kopirgan 7 hours ago

This has been around for ages. I only read the contents briefly to know what the minimum components of Linux are, and that too years ago. When you install any distro (even Debian), so much gets installed that newbies wouldn't even know it exists, let alone use it.

by rkagerer 9 hours ago

How up to date is this?

When trying to learn by copy-pasting from the internet, I sometimes get tripped up by older/newer Linuxes using different versions of commands or best practices (e.g. ifconfig vs. ip, iptables vs. its successors).

by creatonez 8 hours ago

It was last updated in September 2024. Distrowatch's package list can give you an idea of what version numbers to expect: https://distrowatch.com/table.php?distribution=lfs

Once you get to the BLFS part, you have more choice about whether you're going to innovate or use something that's old and in maintenance mode. For example, it has instructions for both Pulseaudio and Pipewire, Xorg and Wayland, SysVinit and systemd. The instructions will rarely be outright incorrect on modern distros unless something is very strange about your environment, since the bulk of the work is done after establishing a more predictable build environment.

by throwaway2016a 6 hours ago

This brings back memories. I tried to do this (using this exact site!) in college. Which, for reference, was 20ish years ago.

I wasn't able to get very far, mostly because I kept running into obscure compiler and linker errors, and 18-year-old me wasn't clear on how to fix them. Would be fun to see how much has changed since then, because it does appear to be at least partially maintained.

by mydriasis 9 hours ago

If you're curious, you should try it. It's a great way to learn about all of the little bits and pieces that go into a working OS that you can use, and by the time you're done you'll be more comfy thinking about Linux in general. It may take some time depending on how often you come back to it, but it's definitely worth it.

by sirodoht 10 hours ago

I went through this over a couple of weeks last year. Very fun; I would recommend it to anyone interested in seeing the complexity of making a Linux distro.

by squarefoot 11 hours ago

by stingraycharles 10 hours ago

This basically allows you to make an OS that’s effectively a single application, right?

by ahepp 8 hours ago

It will spit out a rootfs, or even a block image for a disk, which might make it _look_ like a single application. You will probably update your system by flashing that entire rootfs/image to your target device as if it were one application.

However, it is still Linux under the hood, so there is a kernel + whatever other applications you want running on there, all as separate processes.

You may also be thinking of busybox, which bakes an implementation of most of the GNU coreutils into one file to save space. Many symlinks are often created to that same file, and when invoked the process can check what name it was invoked with to determine its behavior.
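
The trick is easy to see from a shell (paths illustrative):

    # One binary, many names; behavior is chosen by argv[0]:
    ln -s /bin/busybox /bin/ls
    /bin/ls                        # busybox notices it was invoked as "ls"
    busybox --install -s /bin      # or let busybox plant all of its symlinks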

by bubblesnort 10 hours ago

No. It creates a root filesystem that follows your choices of what should be included and tries to be as minimal and small as possible by default. Very suitable for embedded devices and lightweight virtual machines. You can have it build a kernel as well. I use the ability to create a cpio archive for an initramfs on the Linux boxes.

by joezydeco 5 hours ago

I use buildroot to make a commercial product.

It builds the bootloader, the kernel, initramfs and rootfs, and then my application(s) all installed in the right place and connected to systemd. The output is a set of images that burn into their respective partitions on eMMC.

If you're a masochist, there's also Yocto. It'll build your compiler and crosstools as well as the entire system image.

by ahepp 8 hours ago

I reckon you are already aware of this if you’re using it to generate an initramfs, but for those reading along, you can also use it as a docker image

by andrewmcwatters 4 hours ago

I remember trying to automate non-distribution builds of a functional GNU/Linux operating system and trying to not read Linux from Scratch but just official kernel.org documentation.[1]

Unfortunately, the state of Linux documentation is so poor, you can't do it. You need to reference a number of third-party articles that kernel.org itself sometimes links you to.

I believe kernel.org might also mention Linux from Scratch, but LFS does a very poor job of explaining why you need particular dependencies straight out of initramfs.

You need a functional shell, and you need to get your installation onto an actual drive. Neither of those things is explained in sufficient detail with the documentation available today.

LFS at best says "throw in these ingredients," and leaves you with no other information. You can probably read man pages and piece this stuff together, but it requires that you, at the very least, hardcode where to install files with fsck/mount/cp, I think.

Edit: LFS also does a really poor job of explaining why it chooses its "Basic System Software," much of which isn't actually required for a common GNU/Linux system.

[1]: https://github.com/andrewmcwatters/linux-workflow

by machinestops 2 hours ago

> why you need particular dependencies straight out of initramfs.

It came as a shock to me to learn that initramfs is mostly optional. It can be skipped, and you can boot straight into userland.
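
For that to work, the kernel needs the drivers for your disk and root filesystem built in (not as modules), plus a root= argument from the bootloader; the boot entry then simply has no initrd line (device name illustrative):

    # GRUB-style entry booting straight into userland, no initramfs:
    linux /boot/vmlinuz root=/dev/sda2 ro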

by andrewmcwatters 27 minutes ago

Yes, but if I understand correctly, you’d need to configure for this specifically. I think standard configurations and installations use them.

by rbattula 4 hours ago

Tried LFS back in high school - really fun experience, and it teaches you a lot.

by Razengan 2 hours ago

Anyone else annoyed by "online books" sites like these, where you have to play a convoluted game of find-the-breadcrumbs from the website landing page to the beginning of the actual book content?

by greggyb 20 minutes ago

Two clicks from the landing page.

1. Choose the book you want to read (they call them sub-projects)

2. "Download" or "Read online"

Of course, they could reverse the nav order and provide first a "read online"/"download" and then let you pick which of the books you want to read, yielding ... two clicks to get to the one you want to read.

by smitty1e 8 hours ago

I have gone all the way through this, #21584. What a tremendous resource.

by dark-star 8 hours ago

I have used CLFS (Cross-Linux From Scratch) in the past a couple of times to build a Linux system for Sun UltraSPARC and SGI Octane systems.

The things I learned, especially about cross-compiling, were invaluable, but you do have to invest some time understanding everything (even reading the patches that you have to apply on top of build tools, for example) to get the most out of it. If you just copy/paste commands, well, then there are faster/easier ways to get Linux up and running.

Edit: I just noted that CLFS has been dead for a couple of years now, which is sad. I would have loved to try an updated CLFS

by sirsinsalot 11 hours ago

Ah yes, I remember the year I lost to this and the pain of cross compilation. Reader beware.

by astrobe_ 8 hours ago

I've cross-compiled things from time to time over the years, and it seems to me that support has improved a bit. Some time ago it was "cross..wut?", but now it is fairly commonly supported and tested. I think it's thanks to the Raspi and other popular ARM-based things.

by jaystraw 10 hours ago

to anyone confused why others find this fun or worthwhile, i will quote brak's dad: "you just don't get it, das all"
