Discussion:
[crosstool-NG] Design discussion
Yann E. MORIN
2009-04-04 18:14:20 UTC
Hello all!

Recently, I've been challenged about the design of crosstool-NG.

This post is to present the overall design of crosstool-NG, how and why
I came up with it, and to eventually serve as a base for an open discussion
on the matter.

Hopefully some will jump in to offer their views on the subject, and offer
suggestions as to what should be done to improve the situation, should the need
arise.

The mail is structured this way:
1) Genesis of crosstool-NG
2) The way to fulfill the requirements
2.a) Ease maintenance
2.b) Ease configuration of the toolchain
2.c) Support newer versions of components
2.d) Add new features
2.e) Add alternatives where available
3) crosstool-NG installation
3.a) Setting up crosstool-NG: why use a ./configure?
3.b) Installing crosstool-NG: why is it required?
3.c) Running crosstool-NG: why can't I run make menuconfig?
4) crosstool-NG internals
4.a) Programming languages used in crosstool-NG
4.b) Internal API
5) Conclusion

In advance, I apologise for the really, really long post, and for the
limited subset of the English language I use.


=======================================

1) Genesis of crosstool-NG

First, a little introduction to put things straight.

About four years ago, I needed to generate cross-compilers for ARM and
MIPS. One of the requirements was to be able to use various versions of
the components (gcc, glibc, binutils...), and a second was to be able to
switch between glibc and uClibc. Of the different tools I tested, crosstool
was the one most closely matching the requirements, so I ended up using
that for the following 1.5 years.

But crosstool was not easy to configure, and the available versions of the
components were most of the time lagging behind. It was glibc-centric, and
I had to add uClibc support, which was not accepted mainstream.

In the end, maintaining my own tree became problematic, and I decided to
give a try at enhancing crosstool with the following main goals in mind,
in this approximate order of importance:

a- ease overall maintenance
b- ease configuration of the toolchain
c- support newer versions of components
d- add new features
e- add alternatives where available

I mostly saw my changes as an experimental branch of crosstool, which
would ultimately pick interesting features as they mature, while dumping
the uninteresting ones. So crosstool would be the stable branch, while
my work would serve as a kind of testbed. Hence the name: crosstool-NG,
"NG" for "Next Generation".

Never, at any one time, did I intend this stuff to replace crosstool.
What happened is that, around the time I was working on this, Dan KEGEL
became less and less responsive, and changes sent to the list (by anyone,
not just me) took ages to get applied, if they got applied at
all.

So that was how crosstool-NG was born to the world...


=======================================

2) The way to fulfill the requirements

The first move I made was to start from scratch. That way, it
seemed to me it would be easier to come up with a good layout of
things.

2.a) Ease maintenance

At the heart of crosstool was a single script. In there were all the build
procedures for all the components, from installing the kernel headers
up to building gdb.

The first step was to split up this script into smaller ones, each
dedicated to building a single component. This way, I hoped that it would
be easier to maintain each build procedure on its own.


2.b) Ease configuration of the toolchain

In that state, configuring crosstool required editing a file containing
shell variable assignments. There was no proper documentation of what
variables were used, and no clear explanation of each variable's
meaning.

The need for a proper way to configure a toolchain arose, and I quite
instinctively turned to the configuration scheme used by the Linux
kernel. This kconfig language is easy to write. The frontends that
then present the resulting menuconfig have limitations in some corner
cases, but they are maintained by the kernel folks.
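
To illustrate how easy the kconfig language is to write, here is a made-up
fragment (the symbols below are invented for this example, not copied from
crosstool-NG's actual config files, though they use the same CT_ prefix):

```kconfig
# Made-up example of the kconfig language; symbols are hypothetical.
choice
    prompt "C library"

config CT_LIBC_GLIBC
    bool "glibc"

config CT_LIBC_UCLIBC
    bool "uClibc"
    help
      Build the toolchain against the uClibc C library.

endchoice

config CT_TARGET_VENDOR
    string "Vendor part of the target tuple"
    default "unknown"
```

The kernel's frontends (menuconfig and friends) then render this as menus
and write the chosen values to a .config-style file.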

Again, as with the build scripts above, I decided to split each component's
configuration into separate files, with an almost 1-to-1 mapping.

Of course, there are configuration sections that do not apply to a
specific component, but to the overall toolchain: the place to install
it, the target and its options (BE/LE, CPU variants...). And some options
tell crosstool-NG how to behave: the place to find source tarballs, log
verbosity, and so on...


2.c) Support newer versions of components

Adding support for newer versions merely required building an appropriate
patchset for that version. As I didn't have time to dig into every individual
mailing list, I shamelessly vampirised patches from different sources:

- the original crosstool patchset
- the Gentoo patchset
- maybe even the LFS and CLFS patchsets
- the buildroot patchset (later, when integrating uClibc)
- others, floating around, or made up by myself

Of course I kept appropriate attribution (or at least I tried to, I may
have forgotten some, apologies) on every patch I so vampirised.


2.d) Add new features

Ah! Big words, that phrase "new features"!

So, by new features, I mean, for example, new configuration options for:
- NPTL-enabled glibc
- using Linux exported headers
- ARM EABI
- ...

That implied adding a config knob in a config file, and updating the code in
the corresponding build script to set the appropriate ./configure option(s),
to create the appropriate config file...
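
For illustration, a minimal shell sketch of such a knob (the option name
CT_LIBC_NPTL and the function below are made up for this example; the real
build scripts are more involved):

```shell
# Hypothetical sketch: a kconfig knob (CT_LIBC_NPTL, made up here) turned
# into ./configure options inside a component's build script.
build_libc() {
    extra_config=""
    if [ "${CT_LIBC_NPTL}" = "y" ]; then
        extra_config="--with-tls --with-__thread --enable-add-ons=nptl"
    fi
    # In this sketch we only print the command instead of running it:
    echo "./configure --prefix=/usr ${extra_config}"
}

CT_LIBC_NPTL=y
build_libc
```

The main script would source the user's .config so that such CT_* variables
are visible to every build script.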


2.e) Add alternatives where available

The most obvious example of an alternative component is the C library to
use: glibc vs uClibc. Then came eglibc.

It was almost as simple as adding a new config file, a new build script
and a choice in the menuconfig. Sometimes, it required tweaking other
components so they recognised the new alternative; gcc was one that needed
a few tricks and patches to recognise uClibc. But all in all, it went
quite smoothly.


=======================================

3) crosstool-NG installation

3.a) Setting up crosstool-NG: why using a ./configure?

crosstool-NG itself requires a few tools to be present so it can run.
Usually, packages do that via a script called ./configure. So it goes
for crosstool-NG.

crosstool-NG, for its own usage, requires the following:
- grep that accepts -E
- sed that accepts -i and -e
- bash-3 (or bash-4)
- cut, xargs, install, awk
- gcc, to build the kconfig parsers
- curl (or wget), tar, gzip, bzip2, patch, to fetch, extract and patch
the tarballs
- make

Alongside those tools, there are a few that are absolutely required to build
the components:
- flex
- bison
- makeinfo (yeah, we might not want the doc, but some components build it
unconditionally)
- automake
- libtool

crosstool-NG does not use them itself, but better to check for them now than
to fail later.
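
Such checks are simple to express in shell; here is a minimal sketch (the
function name is made up, this is not the actual ./configure code):

```shell
# Minimal sketch of a ./configure-style tool check. "check_for" is a
# made-up name; the real crosstool-NG configure script differs.
check_for() {
    for tool in "$@"; do
        if command -v "${tool}" >/dev/null 2>&1; then
            echo "Checking for '${tool}'... $(command -v "${tool}")"
            return 0
        fi
    done
    echo "Bailing out..." >&2
    return 1
}

check_for grep
check_for curl wget || true   # accept either downloader
```

A real check would also verify features (e.g. that grep accepts -E), not
just the tool's presence.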

Now, once crosstool-NG has been configured, it's time to "build" it.
This is merely a matter of putting the right paths in a file that the
build scripts include, and of replacing the #!@@bash@@ occurrences with
the bash we found (or were told to use) in a few scripts.
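
That substitution is essentially a sed one-liner; a sketch (the @@bash@@
placeholder is the one described above, but the file name here is made up):

```shell
# Sketch of the "build" step: substitute the detected bash path for the
# @@bash@@ placeholder. The script name "demo-script" is hypothetical.
bash_path="$(command -v bash)"
printf '#!@@bash@@\necho hello\n' > demo-script
sed -i -e "s|@@bash@@|${bash_path}|" demo-script
chmod +x demo-script
./demo-script   # prints: hello
```

This is why ./configure checks for a sed that accepts -i and -e.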


3.b) Installing crosstool-NG: why is it required?

OK, now why would it be needed to _install_ crosstool-NG? Why can't it
simply run from the source dir?

The first answer is: yes, you can! Just pass --local to ./configure and
crosstool-NG will be able to run from its source directory. But then, why
on earth is it needed to run make install at all? Well, the install step
sets the execute bit on the frontend, ct-ng, and I find it more convenient,
from the programmer's point of view, to have a single code path in
the Makefile. That may not be what you would expect, but that's the way
I did it. I agree that it can seem ugly, but that's a minor issue, IMHO.

Now, why is crosstool-NG intended to be installed at all?

Think of crosstool-NG as a package like any other: you configure it, you
install it, then you use it. For example, would it make sense to
build your own program in the gcc source directory? I don't think you
do that; rather, you really install gcc in some place, add that to the
PATH, and then cd into your program's source dir and run gcc from there.

To draw a parallel between gcc and crosstool-NG:
  gcc:                  crosstool-NG:
  vi foo.c              ct-ng menuconfig
  gcc -o foo foo.c      ct-ng build
  ./foo                 target-tuple-gcc

Of course, this is overly simplistic, but you get the idea...


3.c) Running crosstool-NG: why can't I run make menuconfig?

As said below, the frontend, ct-ng, is written as a Makefile script. That
does *not* mean that it is a _Makefile_: it is an executable that happens
to be written in the same language that Makefiles are.

Now, what if it were written in Python? Would you expect to run it with
"python ct-ng", or would you simply expect to run it with "ct-ng"? The
same applies if it were written as a shell script: you don't call shell
scripts with "sh my_script", do you? And worse, what if it were written
in C? Surely you'd expect to run it with "ct-ng", no?

Well, the fact that ct-ng is written in the Makefile language might not be
the wisest decision I made on crosstool-NG, but that does *not* change the
fact that you don't want to run make (or any other interpreter) manually
to run it, right?


=======================================

4) crosstool-NG internals

In this section, we'll discuss the internals of crosstool-NG. Most notably,
we'll speak about the API that acts as a frontier between the various parts
of crosstool-NG: the config entries, the main script, the specific build
scripts.

Note: although probably the most important section, it is also the shortest.
I can't and don't want to simply quote what is already available in the
docs/ directory in the sources.

4.a) Programming languages used in crosstool-NG

The frontend, called ct-ng out of laziness, is written as a GNU Makefile
script. Yes. It's a Makefile that can be run. It uses the sha-bang
#!/where/ever/is/make -rf
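
To see how that works, here is a toy stand-in (the file name and target are
made up, and /usr/bin/make is an assumption about where make lives):

```shell
# A toy self-executing Makefile, in the same spirit as ct-ng.
# "mk-demo" and its "help" target are hypothetical; /usr/bin/make assumed.
printf '#!/usr/bin/make -rf\nhelp:\n\t@echo "This is a Makefile, running as a script."\n' > mk-demo
chmod +x mk-demo
./mk-demo
```

The kernel hands the file to make via the sha-bang, and make runs the
default target, so the file behaves like any other executable.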

The core of crosstool-NG is written as a set of bash-3 shell scripts. I do
insist on bash-3, because I use bashisms, and thus the scripts are *not*
POSIX compliant. So, they do not use the traditional sha-bang #!/bin/sh,
but they do explicitly use the sha-bang #!/where/ever/is/bash

The config files are written in the kconfig syntax. The parser was
initially copied and updated from the one in toybox, then updated with the
one from the Linux kernel 2.6.28. It has had a few changes:
- the CONFIG_ prefix has been replaced with CT_
- a leading | in prompts is skipped, and subsequent leading spaces are
not trimmed
- otherwise, leading spaces in prompts are silently trimmed

4.b) Internal API

That API is two-fold:
- variables set by the main script, and available to build scripts
- entry points that build scripts must provide, and that are called by
the main script
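
To make the two folds concrete, here is a hedged sketch (the variable
values and entry-point names below are illustrative; docs/overview.txt has
the real ones):

```shell
# Illustrative sketch of the two-fold API; names and values are made up
# for this example, see docs/overview.txt for the actual API.

# Fold 1: variables set by the main script, visible to build scripts:
CT_TARGET="arm-unknown-linux-gnu"        # hypothetical target tuple
CT_PREFIX_DIR="/opt/x-tools/${CT_TARGET}"

# Fold 2: entry points a build script must provide, called by the
# main script in a fixed order:
do_kernel_get()     { echo "fetching kernel sources for ${CT_TARGET}"; }
do_kernel_extract() { echo "extracting kernel sources"; }
do_kernel_headers() { echo "installing headers into ${CT_PREFIX_DIR}"; }

do_kernel_get && do_kernel_extract && do_kernel_headers
```

Each component's build script only has to fill in its entry points; the
main script provides the environment and drives the sequence.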

The file docs/overview.txt tries to document this API. It is almost
complete wrt the "arch" and "kernel" APIs. The others are nonexistent. This
is due to a lack of time to properly document the API, because it has been
fluctuating a lot until not so long ago.

In this respect, I did not do much better than the original crosstool. :-(

I won't discuss it here; you are encouraged to read that file for more
on the subject.


=======================================

5) Conclusion

So far, I hope that I've explained enough so people can jump in and comment.
All criticism, whether positive or negative, is very welcome. I don't claim
to be either omniscient or omnipotent. I will gladly look at your comments!
Of course, should you find a point on which you disagree, I'd appreciate an
alternate solution to the problem.

Again, I do apologise for the long mail. But that seemed to be really
needed.

Regards,
Yann E. MORIN.

PS. My partner is currently badly sick, and I need to take care of her.
In advance, sorry for the delay that this situation may impose on my answers.
YEM.
--
.-----------------.--------------------.------------------.--------------------.
| Yann E. MORIN | Real-Time Embedded | /"\ ASCII RIBBON | Erics' conspiracy: |
| +0/33 662376056 | Software Designer | \ / CAMPAIGN | ___ |
| --==< ^_^ >==-- `------------.-------: X AGAINST | \e/ There is no |
| http://ymorin.is-a-geek.org/ | _/*\_ | / \ HTML MAIL | v conspiracy. |
`------------------------------^-------^------------------^--------------------'


--
For unsubscribe information see http://sourceware.org/lists.html#faq
Rob Landley
2009-04-05 12:15:56 UTC
Post by Yann E. MORIN
Hello all!
Recently, I've been challenged about the design of crosstool-NG.
I.E. I blogged a couple criticisms and he read my blog:

http://landley.net/notes-2008.html#07-03-2009

(You'll notice that today's entry is a similar criticism of gcc, and at the
end of the January 7th entry I call my own code "stupid". I break
everything.)
Post by Yann E. MORIN
This post is to present the overall design of crosstool-NG, how and why
I came up with it, and to eventually serve as a base for an open discussion
on the matter.
For comparison, my Firmware Linux project also makes cross compilers, so I
have some experience here too. (I spent most of Saturday getting armv4 big
endian soft float to work, and still haven't managed to get uClibc++ to build
under any arm EABI variant; the error is "multiple personality directive"
from the assembler, which is just _weird_.)

My project is carefully designed in layers so you don't have to use the cross
compilers I build. It should be easy to use crosstool-ng output to build the
root filesystems and system images that the later scripts produce. (How easy
it actually is, and whether there's any benefit in doing so, is something I
haven't really looked into yet.) The point is the two projects are not
actually directly competing, or at least I don't think they are.
Post by Yann E. MORIN
Hopefully some will jump in to offer their views on the subject, and offer
suggestions as to what should be done to improve the situation, should the need
arise.
1) Genesis of crosstool-NG
2) The way to fulfill the requirements
2.a) Ease maintenance
2.b) Ease configuration of the toolchain
2.c) Support newer versions of components
2.d) Add new features
2.e) Add alternatives where available
3) crosstool-NG installation
3.a) Setting up crosstool-NG: why use a ./configure?
3.b) Installing crosstool-NG: why is it required?
3.c) Running crosstool-NG: why can't I run make menuconfig?
4) crosstool-NG internals
4.a) Programming languages used in crosstool-NG
4.b) Internal API
5) Conclusion
In advance, I apologise for the really, really long post, and for the
limited subset of the English language I use.
=======================================
1) Genesis of crosstool-NG
First, a little introduction to put things straight.
About four years ago, I needed to generate cross-compilers for ARM and
MIPS. One of the requirements was to be able to use various versions of
the components (gcc, glibc, binutils...), and a second was to be able to
switch between glibc and uClibc. Of the different tools I tested, crosstool
was the one most closely matching the requirements, so I ended up using
that for the following 1.5 years.
But crosstool was not easy to configure, and the available versions of the
components were most of the time lagging behind. It was glibc-centric, and
I had to add uClibc support, which was not accepted mainstream.
I came at it from a different background. I was playing with Linux From
Scratch almost from the beginning, and did several glibc+coreutils based
systems (actually predating coreutils, it was three separate packages when I
started out).

I threw my first build system away in 2003, both because I left an employer
that might have had some claim to work I'd done on company time and because I
got serious about uClibc and busybox. That's what got me into cross
compiling: I was trying to build an x86-uClibc system from an x86-glibc
system, and it turns out that's cross compiling. (I didn't know this at the
time, I just knew it was fiddly and difficult.)

I started by taking the old uClibc build wrapper apart to see what it was
actually _doing_, and reproducing that by hand:

http://lists.busybox.net/pipermail/uclibc/2003-August/027652.html
http://lists.busybox.net/pipermail/uclibc/2003-September/027714.html

Of course this was so long ago, I still cared about Erik's new "buildroot"
thing:

http://lists.busybox.net/pipermail/uclibc/2003-August/027531.html
http://lists.busybox.net/pipermail/uclibc/2003-August/027542.html
http://lists.busybox.net/pipermail/uclibc/2003-August/027559.html

Anyway, the 2003 relaunch resulted in the _previous_ FWL incarnation,
memorialized here:

http://landley.net/code/firmware/old/

Which was thrown away and rebooted from scratch in 2006 based on proper cross
compiling when cross linux from scratch came out.

I looked at crosstool circa 2004-ish, but was turned off by the way it
replicated huge amounts of infrastructure for every single dot release of
every component. (I remember it having separate patches, separate build
scripts, and so on. I don't even remember what it did per-target.)

I wanted something generic. A single set of source code, a single set of
build scripts, with all the variations between them kept as small, simple,
and contained as possible.

I actually ended up not basing anything off of crosstool. Instead when Cross
Linux From Scratch came out, I learned from that, and by asking a lot of
questions of coworkers at an embedded company I worked at for a year
(Timesys). But this was 2006, so it was after you'd already started with
this.

Along the way I wrote this:
http://landley.net/writing/docs/cross-compiling.html

Which Timesys's marketing department wound up influencing a bit. If I was
writing it today it would be titled "why cross compiling sucks" and would be
a lot longer...

Anyway, the project I'm working on now is either the third or the fourth build
system I've done, depending on how you want to count it. The last two have
been designed around _removing_ stuff rather than adding it. Figuring out
what I could do without, and how to get away with it.
Post by Yann E. MORIN
In the end, maintaining my own tree became problematic, and I decided to
give a try at enhancing crosstool with the following main goals in mind,
a- ease overall maintenance
b- ease configuration of the toolchain
c- support newer versions of components
d- add new features
e- add alternatives where available
Can't really argue with those goals, although mostly because they're a bit
vague.

My current build system has very careful boundaries. I know what it _doesn't_
do. This is not only because my first couple systems grew out of control
(adding more and more packages and more and more features), but because I
watched Erik's buildroot explode from a test harness for uClibc into an
accidental Linux distribution.

Buildroot started when the uClibc guys decided that the build wrapper couldn't
work (because libgcc_s.so would always leak a reference to libc.so.6 unless
you rebuilt the compiler from source). So they abandoned the wrapper and
instead made a simple build script to create a uClibc-targeted compiler from
gcc and binutils. Then because it was easy to do and a good test of the
compiler they'd just built, they compiled BusyBox and made a tiny root
filesystem out of that, packaged up the resulting directory as a filesystem
image, and built User Mode Linux to run the result. Thus buildroot was a
combination compiler generator and test harness for uClibc and BusyBox.

Except that every time a new package was made to work with uClibc (often
requiring a patch or two, or special configuration), they added the ability
to build it to the buildroot scripts, both to document how and to make
regression testing easy. It quickly blew up to dozens of packages, and
buildroot discussion took over the uClibc mailing list for a few years
(eventually I got fed up with it and created a buildroot list on the server,
and kicked the buildroot discussion off to that list. Then the uClibc list
was almost dead for a while until the development community recovered.) It
also sucked all Erik's attention away from busybox (which is why I took the
latter over for a while).

This is why my current system is very carefully delineated. I know exactly
what it does NOT do. It builds the smallest possible system capable of
rebuilding itself under itself. I.E. it bootstraps a generic development
environment for a target, within which you can build natively. It has to do
some cross compiling to do this, but once it's done you can _stop_ cross
compiling, and instead fire up qemu and build natively within that.

Reality is of course slightly more complicated, but I edit down towards that
vision fairly ruthlessly. My project will _not_ become a Linux distro,
although you can build one on top of it if you like (ala Mark's Gentoo From
Scratch project).
Post by Yann E. MORIN
I mostly saw my changes as an experimental branch of crosstool, which
would ultimately pick interesting features as they mature, while dumping
the uninteresting ones. So crosstool would be the stable branch, while
my work would serve as a kind of testbed. Hence the name: crosstool-NG,
"NG" for "Next Generation".
Never, at any one time, did I intend this stuff to replace crosstool.
What happened is that, around the time I was working on this, Dan KEGEL
became less and less responsive, and changes sent to the list (by anyone,
not just me) took ages to get applied, if they got applied at
all.
So that was how crosstool-NG was born to the world...
Yup. And from that set of assumptions you've done a fairly good job. What
I'm mostly disagreeing with is your assumptions. (I've thrown out and
restarted my own project several times, because I came to disagree with my
previous iteration's initial assumptions. What I was trying to _do_ had
changed, and starting over was the best way to get to my new goal.)

My current codebase is driven by a desire to challenge my own assumptions (not
just is there a better way to do this, but am I trying to do the right
thing?) and a fairly relentless drive to remove stuff. Just because I got it
working and spent six months doing so is no excuse for keeping it if I figure
out how not to need it anymore. Recent-ish case in point:

http://landley.net/hg/firmware/rev/606
Post by Yann E. MORIN
=======================================
2) The way to fulfill the requirements
The first move I made was to first start from scratch. That way, it
sounded to me it would be easier to come up with a good layout of
things.
I'm all for it. :)
Post by Yann E. MORIN
2.a) Ease maintenance
At the heart of crosstool was a single script. In there were all the build
procedures for all the components, from installing the kernel headers
up to building gdb.
We all start that way. :)

In my case, I separated my design into layers, the four most interesting of
which are:

download.sh - download all the source code and confirm sha1sums
cross-compiler.sh - create a cross compiler for a target.
mini-native.sh - build a root filesystem containing a native toolchain
system-image.sh - package the root filesystem into something qemu can boot

Each of those layers is as independent of the others as I can make it: you
can wget the source yourself without needing download.sh (and it won't
re-download code that's already there with the right sha1sums),
cross-compiler.sh produces a reusable cross compiler you can keep and build
other stuff with, mini-native.sh should be able to use an arbitrary cross
compiler you happen to have installed as long as it can build appropriate
target binaries, and system-image.sh creates a system image out of an
arbitrary directory.

There's a build.sh that runs all the stages in sequence, but it's a fairly
trivial wrapper around the other scripts. (There's another one,
host-tools.sh, called between download.sh and cross-compiler.sh. That one
exists to isolate the build from variations in the host system, but it's
entirely optional and skipping it shouldn't change the results if your host
distro's build environment is reasonable.)
Post by Yann E. MORIN
The first step was to split up this script into smaller ones, each
dedicated to building a single component. This way, I hoped that it would
be easier to maintain each build procedure on its own.
I wound up breaking the http://landley.net/code/firmware/old version into a
dozen or so different scripts. In my earlier versions the granularity was too
coarse; in that one it got too fine. I think my current one has
the granularity about right; each script does something interesting and
explainable.

I factored out some common code into scripts/include.sh and
scripts/functions.sh, but it's all things that should be immediately obvious.

For example, the first package build in cross-compiler.sh is:

# Build and install binutils

setupfor binutils build-binutils &&
AR=ar AS=as LD=ld NM=nm OBJDUMP=objdump OBJCOPY=objcopy \
  "${CURSRC}/configure" --prefix="${CROSS}" --host=${CROSS_HOST} \
  --target=${CROSS_TARGET} --with-lib-path=lib --disable-nls \
  --disable-shared --disable-multilib --program-prefix="${ARCH}-" \
  --disable-werror $BINUTILS_FLAGS &&
make -j $CPUS configure-host &&
make -j $CPUS CFLAGS="-O2 $STATIC_FLAGS" &&
make -j $CPUS install &&
cd .. &&
mkdir -p "${CROSS}/include" &&
cp binutils/include/libiberty.h "${CROSS}/include"

cleanup binutils build-binutils

The sources/include.sh file sets all those environment variables
(autodetecting $CPUS and getting target-specific information from the target
you selected in sources/targets).

The functions setupfor and cleanup are in sources/functions.sh but setupfor is
mostly doing "tar xvjf packages/binutils-*.tar.bz2" and cd-ing to the
appropriate directory, and cleanup is more or less "rm -rf".

Notice there is _nothing_ target-specific in there. All the target
information is factored out into sources/targets. The build scripts
_do_not_care_ what target you're building for.
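
From that description, a simplified sketch of what setupfor and cleanup
might look like (this is a guess for illustration, not the actual
sources/functions.sh, which does more):

```shell
# Simplified sketch of setupfor/cleanup as described above; the real
# functions in sources/functions.sh do more than this.
setupfor() {
    package="$1"
    tar -xjf packages/"${package}"-*.tar.bz2 &&
    cd "${package}"-*/
}

cleanup() {
    cd .. && rm -rf "$@"
}
```

Keeping these helpers tiny is what lets each package build read as a
straight sequence of commands with no target-specific logic.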
Post by Yann E. MORIN
2.b) Ease configuration of the toolchain
In that state, configuring crosstool required editing a file containing
shell variable assignments. There was no proper documentation of what
variables were used, and no clear explanation of each variable's
meaning.
The need for a proper way to configure a toolchain arose, and I quite
instinctively turned to the configuration scheme used by the Linux
kernel. This kconfig language is easy to write. The frontends that
then present the resulting menuconfig have limitations in some corner
cases, but they are maintained by the kernel folks.
Yeah, I modified menuconfig for busybox a few years back so the darn visibility
logic didn't prevent it from writing symbols out to the .config file, so I
could create ENABLE symbols that you could reliably use with if (ENABLE) and
thus rely on dead code elimination instead of littering the code with
#ifdefs. (I recreated that for my toybox project, and fed that code to Mark
for the menuconfig he's using for Gentoo From Scratch.)

I've also been doing miniconfigs for years, even tried to push an improved UI
for them upstream into the kernel (with documentation) at one point:

http://lwn.net/Articles/160497/
http://lwn.net/Articles/161086/

The sources/targets directories in FWL each require three files:

1) miniconfig-uClibc
2) miniconfig-linux
3) details (defines some environment variables describing the target).

And basically what you do is:

./build.sh targetname

Which would try to read config files from "sources/targets/targetname" so it
can build a cross compiler, root filesystem directory, and bootable system
image for that target.
Post by Yann E. MORIN
Again, as with the build scripts above, I decided to split each component's
configuration into separate files, with an almost 1-to-1 mapping.
Of course, there are configuration sections that do not apply to a
specific component, but to the overall toolchain: the place to install
it, the target and its options (BE/LE, CPU variants...). And some options
tell crosstool-NG how to behave: the place to find source tarballs, log
verbosity, and so on...
Ok, a few questions/comments that come to mind here:

1) Why do we have to install your source code? The tarball we download from
your website _is_ source code, isn't it? We already chose where to extract
it. The normal order of operations is "./configure; make; make install".
With your stuff, you have to install it in a second location before you can
configure it. Why? What is this step for?

2) Your configuration menu is way too granular. You ask your users whether or
not to use the gcc "-pipe" flag. What difference does it make? Why ask
this? Is there a real benefit to bothering them with this, rather than just
picking one?

I want to do a more detailed critique here, but I had to reinstall my laptop a
couple weeks ago and my quick attempt to bring up your menuconfig only made
it this far:

./configure --prefix=/home/landley/cisco/crosstool-ng-1.3.2/walrus
Computing version string... 1.3.2
Checking for '/bin/bash'... /bin/bash
Checking for 'make'... /usr/bin/make
Checking for 'gcc'... /usr/bin/gcc
Checking for 'gawk'... not found
Bailing out...

I note that Ubuntu defaults to having "awk" installed; why you _need_ the GNU
version specifically is something I don't understand. This is an issue
I've bumped into in other contexts, and here's my standard response:

http://lkml.indiana.edu/hypermail/linux/kernel/0701.1/2066.html

I remember from getting crosstool-ng working last time that it wanted a bunch
of other random stuff (none of which my build system needs to make cross
compilers or root filesystems).

For example, you require libtool. Why are you checking for libtool? I note
that libtool exists to make non-elf systems work like ELF, I.E. it's a NOP on
Linux, so it's actually _better_ not to have it installed at all because
libtool often screws up cross compiling. (In my experience, when a project
is designed to do nothing and _fails_ to successfully do it, there's a fairly
high chance it was written by the FSF. One of the things my host-tools.sh
does is make sure libtool is _not_ in the $PATH, even when it's installed on
the host. Pretty much the only things that use it are FSF packages, and they
all have autoconf notice it's not there and skip it. Except for binutils,
which bundles its own version and doesn't use the host's anyway...)

It's 7am here and I haven't been to bed yet, so I'll pause here. I need to
download the new version of crosstool-ng in the morning, fight with getting
it installed again, and pick up from here.

Rob
--
GPLv3 is to GPLv2 what Attack of the Clones is to The Empire Strikes Back.

nyet
2009-04-05 17:44:41 UTC
Post by Yann E. MORIN
I need to
download the new version of crosstool-ng in the morning, fight with getting
it installed again, and pick up from here.
svn update
./configure --local
make
make install

done.


Rob Landley
2009-04-05 22:45:30 UTC
Post by nyet
Post by Yann E. MORIN
I need to
download the new version of crosstool-ng in the morning, fight with
getting it installed again, and pick up from here.
svn update
./configure --local
make
make install
done.
A) Your definition of "done" does not involve having actually configured or
built a toolchain yet. This "done" is equivalent to "I just extracted the
tarball" in any other project.

B) What other project has both ./configure and menuconfig? Generally, you
have one or the other, not both.

C) I didn't check out out of svn. (I don't want to start out with a random
svn snapshot, because I have a knack for picking broken ones. I want to
first try a release version with known bugs, that presumably worked for
somebody at some point.)

D) I was just replicating what I had last time, which was to install it in
`pwd`/walrus.

E) Why isn't that the default? (Why even _have_ --local be an option?)

F) Running ./configure with no arguments died because awk doesn't call itself
gawk, and that's where I punted until I felt more awake. The "fight with
getting it installed again" was:

1) Run ./configure, watch it die, install gawk.
2) Run ./configure, watch it die, install bison.
3) Run ./configure, watch it die, install flex.
4) Run ./configure, watch it die, try to install makeinfo, realize that
ubuntu hasn't got a package called makeinfo, install its suggestion
of "texi2html", find out that ./configure still claims it hasn't got
makeinfo, do "aptitude search makeinfo" and come up with nothing, think for a
bit, remember that ubuntu has a fiddly thing its shell does when you mistype a
command to suggest some random package you could install that will give you
an "sl" command, type makeinfo at the command line, have it suggest texinfo,
install texinfo.
5) Run ./configure, watch it die, install automake.
6) Run ./configure, watch it die, install libtool.

And _then_ it tells me it wants to install in /usr/local, which I can work
around with --prefix=`pwd` or its synonym --local, except I did `pwd`/subdir
because I wanted to see what it was actually installing.

The first thing I'd like to say about the prerequisite packages in step F is
that on this laptop I've already built my Firmware Linux system for armv4l,
armv5l, armv4eb, mips, mipsel, powerpc, powerpc-440, x86, x86-64, sparc, sh4,
m68k, and a couple of hardware variants like the wrt610n and User Mode Linux.
I didn't need _any_ of those packages to build binutils, gcc (with g++),
uClibc, uClibc++, busybox, make, bash, distcc, and the linux kernel.

You should never need the name "gawk"; the SUSv4 name is awk:
http://www.opengroup.org/onlinepubs/9699919799/utilities/awk.html

And gawk installs such a symlink. The FSF ./configures will detect "awk" and
use it. The version Ubuntu installs by default is just "awk", the version
busybox provides is also "awk", and they work fine.
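Sketched in shell, the portable way to probe for awk might look like this (an illustrative sketch, not crosstool-NG's actual check; it assumes at least one awk variant is installed):

```shell
# Illustrative sketch: look for awk under its SUSv4 name first, falling back
# to the GNU and mawk names, instead of requiring the name "gawk".
AWK=""
for candidate in awk gawk mawk; do
    if command -v "$candidate" >/dev/null 2>&1; then
        AWK="$candidate"
        break
    fi
done
[ -n "$AWK" ] || echo "no awk found" >&2
```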

The FSF packages ship with cached lex and yacc output, same for autoconf.
They won't regenerate those even when the host tools are there; they just use
the cached versions. The Linux kernel also has _shipped_ files for the few
things that involved yacc (mainly the menuconfig stuff), and I believe it
actually won't regenerate those unless told to do so manually.

The "info" documentation format is obsolete. It was the FSF's attempt to
replace man pages 20 years ago, and it utterly failed. There was zero uptake
outside the FSF, partly because their interface was based on "gopher" which
lost out to the web circa 1993.

I helped Eric Raymond debug his "doclifter" project to convert man pages to
docbook, and along the way he was pushing docbook masters upstream into
various packages so that the versions humans edit were docbook and the man
pages just became a legacy output format (and you could produce HTML or PDF
from the same source). If you're wondering why the linux system man pages
are all available on the web, it's because their masters are in docbook:

http://www.kernel.org/doc/man-pages/

The point of all this is Eric talked to the FSF guys about what to do with
the "info" pages, and they already had their own tools to migrate all the
info stuff to docbook, and an in-house plan to do so. (This was circa 2007 I
think.) The fact they haven't actually done it yet may be related to the
fact that their most recent release of "make" was three years ago, and the
gap before that was four years, or the gap between tar 1.13 and 1.14 being
almost five years. (Actual software is secondary to that lot, their
political/religious crusade is what they really care about.)

To make a long story short (too late!), everything that uses makeinfo uses
autoconf, and will skip it if it isn't there.

I mentioned libtool last time. It's a package that exists to make non-elf
libraries behave like elf libraries during the linking phase. Linux has been
a more or less exclusively ELF-based system since 1995, and the way it
handles non-ELF output formats (like the kernel's bzImage or the nommu
binflat stuff) is to first produce an ELF file and then have a conversion
tool create the desired format from the ELF file. (That's why the kernel
creates "vmlinux" on the way to creating bzImage.)

So libtool literally has _nothing_ to do on any Linux system from the past
decade. Again, all the packages that use it are autoconfed and will skip it
if it isn't there, which is a _good_ thing when you're cross compiling
because libtool gets _confused_ easily by cross compiling and you have to go
out of your way to wire around it. One way to make cross compiling easier is
to _uninstall_ things like libtool so they can't screw stuff up.

Rob
--
GPLv3 is to GPLv2 what Attack of the Clones is to The Empire Strikes Back.

Rob Landley
2009-04-06 02:03:42 UTC
Permalink
Post by Rob Landley
F) Running ./configure with no arguments died because awk doesn't call
itself gawk, and that's where I punted until I felt more awake. The "fight
1) Run ./configure, watch it die, install gawk.
2) Run ./configure, watch it die, install bison.
3) Run ./configure, watch it die, install flex.
4) Run ./configure, watch it die, try to install makeinfo, realize that
ubuntu hasn't got a package called makeinfo, install its suggestion
of "texi2html", find out that ./configure still claims it hasn't got
makeinfo, do "aptitude search makeinfo" and come up with nothing, think for
a bit, remember that ubuntu has a fiddly thing its shell does when you
mistype a command to suggest some random package you could install that
will give you an "sl" command, type makeinfo at the command line, have it
suggest texinfo, install texinfo.
5) Run ./configure, watch it die, install automake.
6) Run ./configure, watch it die, install libtool.
Note that part of my complaint here is it doesn't tell you _all_ the things it
needs; it just dies at the first one and forces you to fix each one
individually and repeat the process half a dozen times to find out the rest.

(That's aside from every single one of the above being demonstrably
unnecessary to build a working cross compiler.)
Post by Rob Landley
And _then_ it tells me it wants to install in /usr/local, which I can work
around with --prefix=`pwd` or its synonym --local, except I did
`pwd`/subdir because I wanted to see what it was actually installing.
For comparison purposes, let me tell you how I dealt with this set of problems
on my system. Some of my design goals were:

1) Don't pester the user with unnecessary questions. (Let them override your
defaults, but it should just work out of the box.)

2) Don't ever require root access. Do everything as a normal user.
(Including setting up the environment for build prerequisites.)

3) Isolate the build from variations in the host. The host needs bash to run
the scripts and a working toolchain, but don't assume it has _anything_ else
installed.

4) Keep your dependencies down to an absolute minimum. The more you depend
on, the more there is to go wrong.

5) Don't let the build vary if the host has _extra_ things installed. For
example, the ./configure of distcc will build extra functionality if it
detects python is installed. So don't let its ./configure _find_ python.
(Without this, your build isn't properly reproducible on different systems
because you don't know what your dependencies actually are. Don't let the
build use anything it doesn't actually need, and doesn't explicitly include.)

Ok, now how's that implemented?

Firmware Linux hasn't got a ./configure stage, or menuconfig. You can
configure it by editing the file "config", but everything in there is
optional and you shouldn't have to worry about it your first time running the
thing. To use it, you download the tarball, extract it, and
run "./build.sh". (There's a README that walks you through this.)

The second script build.sh runs (after download.sh) is host-tools.sh. (I
mentioned it here last time.) What host-tools.sh does is populate a
directory (build/host) with all the commands the rest of the build will need,
and then sets "PATH=/path/to/build/host" and nothing else. This isolates the
build from variations in the host system. Instead of forcing you to modify
your host to be able to run the build, the build assumes an absolute minimum
environment and builds up from there to what it needs.
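The isolation step boils down to a few lines of shell. This is an illustrative sketch rather than the actual host-tools.sh; the names HOSTTOOLS and OLDPATH follow the snippet quoted below:

```shell
# Illustrative sketch of the PATH isolation: everything the build may run
# must first be placed (or symlinked) into $HOSTTOOLS.
HOSTTOOLS="$(pwd)/build/host"
mkdir -p "$HOSTTOOLS"
OLDPATH="$PATH"
PATH="$HOSTTOOLS"
export PATH
```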

Here's how host-tools.sh works:

First, you need a host toolchain in order to build anything else. This is a
chicken and egg problem; we can't build a host toolchain from source unless
we've already got one to build it with. So the one you already have
installed had better work, and building another one is just redundant.

So for the host toolchain, host-tools.sh creates symlinks to the host's
existing commands:

for i in ar as nm cc gcc make ld
do
  [ ! -f "${HOSTTOOLS}/$i" ] &&
    (ln -s `PATH="$OLDPATH" which $i` "${HOSTTOOLS}/$i" || dienow)
done

Notice it doesn't symlink "strip". The reason for this is that using the host
strip during cross compilation is a common mistake; you're supposed to use
the target $ARCH-strip command. So not having the host strip in the $PATH is
an easy way to catch that one, and it's not hard to adjust the packages we're
building for the host not to need strip (either by building with -s or just
using the unstripped versions on the host, where disk space is easy to come
by).

Then host-tools.sh builds a defconfig busybox and installs all those commands
into build/host. The reason it does this is so that A) we're building with
known versions of all these tools, which have been previously tested and are
known to work, B) it's an easy smoke test to confirm that we can rebuild all
this stuff under the tools we're installing into the target system. (The
resulting build environment will have the busybox versions of these tools.
If we can't build with 'em now, we won't be able to build with them in the
target system, so we need to know whether or not they work.)

Then host-tools.sh builds and installs a few other things (such as distcc,
genext2fs/e2fsprogs, and squashfs) which are really optional and can be
ignored here. You don't need them to build the cross compiler or the root
filesystem. Either ext2 or squashfs is needed to create a system image,
depending on what target filesystem type you select (it defaults to ext2).

Now all of this is optional. You can skip the host-tools.sh script and it
won't adjust the $PATH, in which case cross-compiler.sh, mini-native.sh, and
system-image.sh will use whatever $PATH and whatever tools you have installed
in it. But then it's your problem making sure your host has everything it
needs to have installed (note that by default, ubuntu doesn't even install
the "patch" command), and nothing extra that will confuse the build. Also,
if anything is _going_ to break, it's generally host-tools.sh. Once that's
finished, the rest of the build should run to completion unless something's
_really_ wrong with your environment (out of disk space, out of memory, your
host compiler is broken, etc.)

Also note that all of this is implementation detail. It's called automatically
by build.sh, and people who just want to kick out a toolchain don't have to
care about any of it.

Rob

P.S. I am oversimplifying what host-tools does slightly. For example, I
skipped over the fact it can build qemu, which is never needed to build
anything I've discussed, but can be used as a smoke test to run a "hello
world" program built with the new cross compiler, and it can also be used to
_run_ the completed system images.

I also skipped over how if you set the environment variable RECORD_COMMANDS
(it's one of the options in the "config" file), then instead of populating a
host directory with new commands, it'll create a directory of symlinks to a
wrapper that logs the command line and then calls the original command. This
way you can grep through the log to see every single command the entire build
ran, and get a list of all the command line utilities it needed so you can
audit what you're actually using. Note that this will miss #!/bin/bash style
script interpreter calls, but there aren't very many of those.
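The RECORD_COMMANDS idea can be sketched as a tiny logging wrapper. This is an illustrative reconstruction, not FWL's actual implementation; the names COMMAND_LOG and REAL_PATH are hypothetical:

```shell
# Illustrative sketch: generate one wrapper script; each tool in the wrapper
# directory is a symlink to it. The wrapper appends "name args" to
# $COMMAND_LOG, then execs the real tool of the same name from $REAL_PATH.
make_wrapper() {
    wrapdir="$1"
    mkdir -p "$wrapdir"
    cat > "$wrapdir/wrapper" <<'EOF'
#!/bin/sh
echo "$(basename "$0") $*" >> "$COMMAND_LOG"
exec "$REAL_PATH/$(basename "$0")" "$@"
EOF
    chmod +x "$wrapdir/wrapper"
}

# Usage (illustrative):
#   make_wrapper build/wrap
#   ln -s wrapper build/wrap/gcc          # one symlink per logged tool
#   PATH=build/wrap COMMAND_LOG=log REAL_PATH=/usr/bin gcc --version
```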

That's one of those "more complexity exists, but you have to dig for it" style
things. You have to read either
http://impactlinux.com/fwl/documentation.html or the comments in "config" to
find out about this at all. Newbies aren't confronted with it.
--
GPLv3 is to GPLv2 what Attack of the Clones is to The Empire Strikes Back.

Yann E. MORIN
2009-04-06 20:47:29 UTC
Permalink
Rob,
All,

(apologies, forgot to add salutations in my previous mails).
Post by Rob Landley
For comparison purposes, let me tell you how I dealt with this set of problems
It seems it has come to the "mine is bigger than yours" type of argument,
and I won't fight on this. End of story.

You still haven't convinced me that there were _design_ flaws in crosstool-NG.
That you and I don't pursue the same goal is quite obvious. But that's an
entirely different matter.

I do appreciate that you show how you achieved your goals. I may even build
upon this should I need it, or even end up using it as-is if it fits my
needs. But again, that was not the subject. Or so I thought.

Regards,
Yann E. MORIN.
--
.-----------------.--------------------.------------------.--------------------.
| Yann E. MORIN | Real-Time Embedded | /"\ ASCII RIBBON | Erics' conspiracy: |
| +0/33 662376056 | Software Designer | \ / CAMPAIGN | ___ |
| --==< ^_^ >==-- `------------.-------: X AGAINST | \e/ There is no |
| http://ymorin.is-a-geek.org/ | _/*\_ | / \ HTML MAIL | v conspiracy. |
`------------------------------^-------^------------------^--------------------'


Rob Landley
2009-04-07 07:55:39 UTC
Permalink
Post by Yann E. MORIN
Rob,
All,
(apologies, forgot to add salutations in my previous mails).
Post by Rob Landley
For comparison purposes, let me tell you how I dealt with this set of
It seems it has come to the "mine is bigger than yours" type of argument,
and I won't fight on this. End of story.
Wasn't trying to.
Post by Yann E. MORIN
You still haven't convinced me that there were _design_ flaws in
crosstool-NG. That you and I don't pursue the same goal is quite obvious.
But that's an entirely different matter.
I was trying to understand what your goals _were_. The goal of being able to
insert new packages into arbitrary existing root filesystems without having
to rebuild any of those existing binaries (and without having to statically
link the new binaries) explains a lot of your approach.

It also makes your build system significantly more complicated to configure
and use for something that _is_ building a new system image from scratch, but
at least there's a good reason for it.

Rob
--
GPLv3 is to GPLv2 what Attack of the Clones is to The Empire Strikes Back.

Yann E. MORIN
2009-04-06 20:39:19 UTC
Permalink
Post by Rob Landley
A) Your definition of "done" does not involve having actually configured or
built a toolchain yet. This "done" is equivalent to "I just extracted the
tarball" in any other project.
Which is just about true! ;-)
Post by Rob Landley
B) What other project has both ./configure and menuconfig? Generally, you
have one or the other, not both.
Except they do not serve the same purpose!

- ./configure is to configure the crosstool-NG "program"

- "menuconfig" is the same as if you fired up vi and edited some variables.
Except that it tries to provide a more user-friendly interface.
Post by Rob Landley
C) I didn't check it out of svn. (I don't want to start out with a random
svn snapshot, because I have a knack for picking broken ones. I want to
first try a release version with known bugs, that presumably worked for
somebody at some point.)
Agreed. The 1.3 series is now four months old, and the trunk has added quite
a number of enhancements, although not the ones _you_ would expect.
Post by Rob Landley
E) Why isn't that the default? (Why even _have_ --local be an option?)
Well, I meant it to be installable for two main reasons:

- share the same crosstool-NG so that users on the same system can build
different toolchains for different (or identical) platforms, depending
on their needs. We have a build machine at work, where users log in to
build their software, and some have the right (project-wise) to build
their own toolchains (notably when a new platform arrives).

- make it packageable by any distribution (I know that it's been in the
OpenSUSE factory for a while now, even if it's not in the main distro).
I'm planning to make a Debian package (in fact two: one with the core,
named something like ct-ng-[version]-noarch.deb, and one with the patchsets,
named ct-ng-data-[version]-noarch.deb)
Post by Rob Landley
F) Running ./configure with no arguments died because awk doesn't call itself
gawk, and that's where I punted until I felt more awake. The "fight with
1) Run ./configure, watch it die, install gawk.
2) Run ./configure, watch it die, install bison.
3) Run ./configure, watch it die, install flex.
4) Run ./configure, watch it die, try to install makeinfo, realize that
ubuntu hasn't got a package called makeinfo, install its suggestion
of "texi2html", find out that ./configure still claims it hasn't got
makeinfo, do "aptitude search makeinfo" and come up with nothing, think for a
bit, remember that ubuntu has a fiddly thing its shell does when you mistype a
command to suggest some random package you could install that will give you
an "sl" command, type makeinfo at the command line, have it suggest texinfo,
install texinfo.
5) Run ./configure, watch it die, install automake.
6) Run ./configure, watch it die, install libtool.
So you're suggesting that I continue with the checks, marking all missing
tools, reporting those at the end, and then aborting. Right?

No ./configure I know of behaves like this. So I was quite dumb, and followed
the herd. Thanks to you, a few steps further, I had fallen over the bridge...
Post by Rob Landley
And _then_ it tells me it wants to install in /usr/local, which I can work
around with --prefix=`pwd` or its synonym --local,
*NOOOO!!!*** --prefix=`pwd` is *not* the same as --local! Argghh...
Post by Rob Landley
except I did `pwd`/subdir
because I wanted to see what it was actually installing.
The first thing I'd like to say about the prerequisite packages in step F is
that on this laptop I've already built my Firmware Linux system for armv4l,
armv5l, armv4eb, mips, mipsel, powerpc, powerpc-440, x86, x86-64, sparc, sh4,
m68k, and a couple of hardware variants like the wrt610n and User Mode Linux.
I didn't need _any_ of those packages to build binutils, gcc (with g++),
uClibc, uClibc++, busybox, make, bash, distcc, and the linux kernel.
http://www.opengroup.org/onlinepubs/9699919799/utilities/awk.html
And gawk installs such a symlink. The FSF ./configures will detect "awk" and
use it. The version Ubuntu installs by default is just "awk", the version
busybox provides is also "awk", and they work fine.
No, they don't for me. I'm using GNU extensions. I know it's bad. They are
going away...
Post by Rob Landley
The FSF packages ship with cached lex and yacc output, same for autoconf. It
won't regenerate those if the host packages are there, but use the cached
versions. The Linux kernel also has _shipped files for the few things that
involved yacc (mainly the menuconfig stuff), and I believe that actually
won't regenerate those unless told to do so manually.
Ah, but there is a bug in either Gentoo *or* one of the MPFR tarballs. It
works great on my Debian. I have had reports it was successful on Fedora.
I have seen it work seamlessly on OpenSUSE. It broke under Gentoo:
http://sourceware.org/ml/crossgcc/2008-05/msg00080.html
http://sourceware.org/ml/crossgcc/2008-06/msg00005.html
Post by Rob Landley
The "info" documentation format is obsolete. It was the FSF's attempt to
replace man pages 20 years ago, and it utterly failed. There was zero uptake
outside the FSF, partly because their interface was based on "gopher" which
lost out to the web circa 1993.
I don't care. Some components will not build without it, they want to
update their documentation, and I can't spend time on fixing those
suckers. So requiring makeinfo is easier than trying to do without it.
Post by Rob Landley
(Actual software is secondary to that lot, their
political/religious crusade is what they really care about.)
Please, Rob. Please...
Post by Rob Landley
To make a long story short (too late!), everything that uses makeinfo uses
autoconf, and will skip it if it isn't there.
Not true in practice.
Post by Rob Landley
I mentioned libtool last time.
I already answered that one.
Post by Rob Landley
One way to make cross compiling easier is
to _uninstall_ things like libtool so they can't screw stuff up.
But I can't demand that the end user remove packages from his/her machine
that might be useful to him/her!

Well, you could reverse the argument by saying I can't force him/her to
install stuff they don't need. But in that case they won't be able to use
crosstool-NG. Unless they come up with a change-request and a proper patch.

After all, if you want to compile some software, you'll need a compiler.
And if you don't want to install that, then you won't be able to compile
your software.

Regards,
Yann E. MORIN.
--
.-----------------.--------------------.------------------.--------------------.
| Yann E. MORIN | Real-Time Embedded | /"\ ASCII RIBBON | Erics' conspiracy: |
| +0/33 662376056 | Software Designer | \ / CAMPAIGN | ___ |
| --==< ^_^ >==-- `------------.-------: X AGAINST | \e/ There is no |
| http://ymorin.is-a-geek.org/ | _/*\_ | / \ HTML MAIL | v conspiracy. |
`------------------------------^-------^------------------^--------------------'


Rob Landley
2009-04-07 07:50:56 UTC
Permalink
Post by Yann E. MORIN
Post by Rob Landley
A) Your definition of "done" does not involve having actually configured
or built a toolchain yet. This "done" is equivalent to "I just extracted
the tarball" in any other project.
Which is just about true! ;-)
Post by Rob Landley
B) What other project has both ./configure and menuconfig? Generally,
you have one or the other, not both.
Except they do not serve the same purpose!
Agreed, I just find the need for so much configuration that you need two
complete configuration systems a bit disturbing.
Post by Yann E. MORIN
- ./configure is to configure the crosstool-NG "program"
- "menuconfig" is the same as if you fired vi and edited some variables.
Except that it tries to provide a more user-friendly interface.
The distinction between the crosstool-NG "program" and actually using
crosstool to build something is one of the things I still don't see the
reason for.

As for what the first ./configure currently does, your "is everything
installed" tests that ./configure currently does could just as easily go in
ct-ng. If you really need some of the output in a Makefile, you could
generate a "Makefile.inc" snippet or some such that only gets regenerated
(and thus the tests re-run) if it's not there.
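Rob's suggestion can be sketched as a cache file that the checks write once and later runs just source. The file name and the single probe are illustrative, not what ct-ng actually does:

```shell
# Illustrative sketch: run the (slow) environment probes only when no cached
# results file exists; otherwise just source the cache.
ensure_env_cache() {
    cache="$1"
    if [ ! -f "$cache" ]; then
        # Probe the environment and record the results (probe is illustrative).
        echo "HOST_AWK=$(command -v awk)" > "$cache"
    fi
    . "$cache"
}

# Usage (illustrative): ensure_env_cache ./env.cache
```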

As for the rest of it, I don't see why you can't skip straight to running
ct-ng out of a freshly extracted tarball without having to
do "./configure --local; make; make install" first.
Post by Yann E. MORIN
Post by Rob Landley
C) I didn't check it out of svn. (I don't want to start out with a
random svn snapshot, because I have a knack for picking broken ones. I
want to first try a release version with known bugs, that presumably
worked for somebody at some point.)
Agreed. The 1.3 series is now four months old, and the trunk has added quite
a number of enhancements, although not the ones _you_ would expect.
Possibly it's unfair to critique it then.
Post by Yann E. MORIN
Post by Rob Landley
E) Why isn't that the default? (Why even _have_ --local be an option?)
- share the same crosstool-NG so that users on the same system can build
different toolchains for different (or identical) platforms, depending
on their needs. We have a build machine at work, where users log in to
build their software, and some have the right (project-wise) to build
their own toolchains (notably when a new platforms arrives).
They can't extract a copy into their home directory? (You already have "local
tarballs directory", which is presumably set per .config since menuconfig
edits it. So either they're downloading their own copies of all the source
packages or they have to set all their projects to use the shared tarballs
directory...)

I'm a bit confused by the need for special infrastructure so that multiple
users can each build their own copy from source. Don't all source tarballs
work that way? What resources are actually shared?
Post by Yann E. MORIN
- make it packageable by any distribution (I know that it's been in the
OpenSUSE factory for a while now, even if it's not in the main distro)
I'm planning to make a Debian package (in fact two: one with the core,
named something like ct-ng-[version]-noarch.deb, and one with the patchsets,
named ct-ng-data-[version]-noarch.deb)
Again, confused. There are .rpm and .deb packages of every source package out
there, including the ones that don't support building out of tree. And there
are .srpms of things that just come in normal source tarballs, which would
be "most of them"...?

How does this apply?
Post by Yann E. MORIN
Post by Rob Landley
F) Running ./configure with no arguments died because awk doesn't call
itself gawk, and that's where I punted until I felt more awake. The
1) Run ./configure, watch it die, install gawk.
2) Run ./configure, watch it die, install bison.
3) Run ./configure, watch it die, install flex.
4) Run ./configure, watch it die, try to install makeinfo, realize that
ubuntu hasn't got a package called makeinfo, install its suggestion
of "texi2html", find out that ./configure still claims it hasn't got
makeinfo, do "aptitude search makeinfo" and come up with nothing, think
for a bit, remember that ubuntu has a fiddly thing its shell does when you
mistype a command to suggest some random package you could install that
will give you an "sl" command, type makeinfo at the command line, have it
suggest texinfo, install texinfo.
5) Run ./configure, watch it die, install automake.
6) Run ./configure, watch it die, install libtool.
So you're suggesting that I continue with the checks, marking all missing
tools, reporting those at the end, and then aborting. Right?
Well, what I'd suggest is that most of those checks are for things the build
doesn't actually seem like it should _need_. (You're installing automake but
not autoconf? Yet you have a menuconfig option to let the config.guess stuff
be overridden? Um... ok?)

But if you're going to do them yes it would be nice to get the full list you
need to install all at once. Just a little niceness to people like me trying
to install it for the first time.
Post by Yann E. MORIN
No ./configure I know of behaves like this. So I was quite dumb, and
followed the herd. Thanks to you, a few steps further, I had fallen over
the bridge...
Post by Rob Landley
And _then_ it tells me it wants to install in /usr/local, which I can
work around with --prefix=`pwd` or its synonym --local,
*NOOOO!!!*** --prefix=`pwd` is *not* the same as --local! Argghh...
Why not? (I honestly don't know. Seemed the same to me...)
Post by Yann E. MORIN
Post by Rob Landley
except I did `pwd`/subdir
because I wanted to see what it was actually installing.
The first thing I'd like to say about the prerequisite packages in step F
is that on this laptop I've already built my Firmware Linux system for
armv4l, armv5l, armv4eb, mips, mipsel, powerpc, powerpc-440, x86, x86-64,
sparc, sh4, m68k, and a couple of hardware variants like the wrt610n and
User Mode Linux. I didn't need _any_ of those packages to build binutils,
gcc (with g++), uClibc, uClibc++, busybox, make, bash, distcc, and the
linux kernel.
http://www.opengroup.org/onlinepubs/9699919799/utilities/awk.html
And gawk installs such a symlink. The FSF ./configures will detect "awk"
and use it. The version Ubuntu installs by default is just "awk", the
version busybox provides is also "awk", and they work fine.
No, they don't for me. I'm using GNU extensions. I know it's bad. They are
going away...
*shrug* Ok. (I just knew that the packages you were building didn't need
them, didn't occur to me your scripts would. My bad. Not a lot of people
make extensive use of awk anymore...)
Post by Yann E. MORIN
Post by Rob Landley
The FSF packages ship with cached lex and yacc output, same for autoconf.
It won't regenerate those if the host packages are there, but use the
cached versions. The Linux kernel also has _shipped files for the few
things that involved yacc (mainly the menuconfig stuff), and I believe
that actually won't regenerate those unless told to do so manually.
Ah, but there is a bug in either Gentoo *or* one of the MPFR tarballs. It
works great on my Debian. I have had reports it was successful on Fedora.
http://sourceware.org/ml/crossgcc/2008-05/msg00080.html
http://sourceware.org/ml/crossgcc/2008-06/msg00005.html
So it's a workaround for a bug in gentoo (overly aggressive "sanity check")
and the workaround impacts all platforms. Not an optimal solution, but at
least an understandable one.
Post by Yann E. MORIN
Post by Rob Landley
The "info" documentation format is obsolete. It was the FSF's attempt to
replace man pages 20 years ago, and it utterly failed. There was zero
uptake outside the FSF, partly because their interface was based on
"gopher" which lost out to the web circa 1993.
I don't care. Some components will not build without it, they want to
update their documentation, and I can't spend time on fixing those
suckers. So requiring makeinfo is easier than trying to do without it.
Which ones? (I've never hit that.)
Post by Yann E. MORIN
Post by Rob Landley
To make a long story short (too late!), everything that uses makeinfo
uses autoconf, and will skip it if it isn't there.
Not true in practice.
I'll take your word you hit bugs, but I am curious what they are and how hard
patching them would be.
Post by Yann E. MORIN
Post by Rob Landley
I mentioned libtool last time.
I already answered to that one.
Post by Rob Landley
One way to make cross compiling easier is
to _uninstall_ things like libtool so they can't screw stuff up.
But I can't demand that the end user remove packages from his/her
machine that might be useful to him/her!
That's why I trimmed the $PATH. :)

Your approach is more conventional, but you're also operating under the
constraint of trying to support a big polynomial X*Y*Z many different
possible combinations of different package selections and different package
versions for different target platforms, which you can't possibly hope to
test all of, and the way you've set it up you don't even know what today's
combination will be yet when you run your environment checks, so you can't
even tag "package version x requires automake" if you wanted to get more
granular. You have to support every possibility up front, so requiring a
conservative environment in which you can build every possible combination
makes sense for your build system.

Listing all missing packages at once would be a great convenience and give a
better first impression.
Post by Yann E. MORIN
Well, you could reverse the argument by saying I can't impose him/her to
install stuff they don't need. But in that case they won't be able to use
crosstool-NG. Unless they come up with a change-request and a proper patch.
After all, if you want to compile some software, you'll need a compiler.
And if you don't want to install that, then you won't be able to compile
your software.
You do have to start somewhere.
Post by Yann E. MORIN
Regards,
Yann E. MORIN.
Rob
--
GPLv3 is to GPLv2 what Attack of the Clones is to The Empire Strikes Back.

--
For unsubscribe information see http://sourceware.org/lists.html#faq
Ladislav Michl
2009-04-07 12:39:19 UTC
Permalink
Post by Rob Landley
Post by nyet
Post by Yann E. MORIN
I need to
download the new version of crosstool-ng in the morning, fight with
getting it installed again, and pick up from here.
svn update
./configure --local
make
make install
done.
A) Your definition of "done" does not involve having actually configured or
built a toolchain yet. This "done" is equivalent to "I just extracted the
tarball" in any other project.
B) What other project has both ./configure and menuconfig? Generally, you
have one or the other, not both.
PTXdist (and this project is worth checking out as well;
http://www.pengutronix.de/software/ptxdist/index_en.html). And without looking
at the crosstool-ng source too closely, it looks like ./configure and menuconfig
are used in the same way. So ./configure && make is used to build the tool
itself, and menuconfig serves to configure toolchain options. Doesn't that seem
a reasonable enough solution?

Best regards,
ladis

--
For unsubscribe information see http://sourceware.org/lists.html#faq
Rob Landley
2009-04-08 00:29:08 UTC
Permalink
Post by Ladislav Michl
Post by Rob Landley
Post by nyet
Post by Yann E. MORIN
I need to
download the new version of crosstool-ng in the morning, fight with
getting it installed again, and pick up from here.
svn update
./configure --local
make
make install
done.
A) Your definition of "done" does not involve having actually configured
or built a toolchain yet. This "done" is equivalent to "I just extracted
the tarball" in any other project.
B) What other project has both ./configure and menuconfig? Generally,
you have one or the other, not both.
PTXdist (and this project is worth checking out as well;
http://www.pengutronix.de/software/ptxdist/index_en.html). And without
looking at the crosstool-ng source too closely, it looks like ./configure and
menuconfig are used in the same way. So ./configure && make is used to
build the tool itself and menuconfig serves to configure toolchain options.
Doesn't that seem a reasonable enough solution?
I vaguely recall poking at that over the past year, but somebody else had
already installed it.

$ ./configure
checking for ptxdist patches... no
configure: error: install the ptxdist-patches archive into the same
directory as ptxdist.

Why is that a separate tarball? If the base distro can't do _anything_
without the patches, and they're released simultaneously into the same
download directory...

And on top of all the stuff I installed for crosstool, _this_ one wants me to
install "expect". Grumble.

Ok, I configured it, I did "make" and it built menuconfig, which I found odd.
Then I did "make menuconfig" and it said "no rule to make target menuconfig".
(So deeply impressed right now.)

Ok, "./configure --prefix=`pwd`/tmpdir" and then "make install" into that...
and it just copied the patches directory it insisted I untar into its source
directory. Went into tmpdir, ran bin/ptxdist, and it spit out 68 lines
according to wc.

Ok, this is another package doing this, but that doesn't make it right. Why
on _earth_ would you need to install source code before you're allowed to
configure and compile it? The only thing it actually built when I did the
first "make" was the kconfig binaries. The kernel (where kconfig comes from)
does not require you to make and install kconfig before using menuconfig.
Neither do long-term users of it like busybox or uClibc...

Right, humor it:

$ bin/ptxdist menuconfig

ptxdist: error: ptxconfig file is missing
ptxdist: error: please 'ptxdist clone' an existing project

Insane. Right, give it a try:

$ bin/ptxdist clone

Spits out exactly the same help as if you run ptxdist without arguments; OK,
scroll down to "clone", which says:

clone <from> <to> create a new project, cloned from <from>.

No clue where the "from" options are. Nothing obvious in the "lib" or "bin"
directories it installed, and I already deleted the source it "installed"
from since it obviously wasn't _using_ it...

If I had to make this work, the next step would be to go to the website and
look for online documentation, or re-extract the source tarball and see what
that had, but I'm afraid I've run out of interest.

Getting back to your original point, "building the tool itself" in this
context apparently means building the menuconfig binary, because bin/ptxdist
itself is a bash script. I have no idea what the first ./configure; make;
make install cycle is expected to accomplish. (Other than hardwiring in
absolute paths you can detect at runtime with the pwd command or perhaps by
some variant of readlink -f "$(which "$0")" if you want to be _more_ flexible
than the make command...)
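The runtime detection mentioned above, sketched in shell (locate_top is a
made-up helper name, used only for illustration):

```shell
# A script can locate its own top-level directory at run time instead of
# having an install prefix baked in by ./configure.
locate_top() {
    # $1 is the path the script was invoked as (normally "$0")
    self="$(readlink -f "$1")"        # resolve symlinks to the real path
    dirname "$(dirname "$self")"      # .../top/bin/tool -> .../top
}

# Demonstrate with a throwaway bin/tool layout:
demo="$(mktemp -d)"
mkdir -p "$demo/bin"
touch "$demo/bin/tool"
top="$(locate_top "$demo/bin/tool")"
echo "top-level directory: $top"
```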

Will I someday have to compile and install makefiles before I can build the
package they describe? I do not understand what these extra steps
accomplish...


Rob
--
GPLv3 is to GPLv2 what Attack of the Clones is to The Empire Strikes Back.

--
For unsubscribe information see http://sourceware.org/lists.html#faq
Ladislav Michl
2009-04-08 10:22:09 UTC
Permalink
Post by Rob Landley
Post by Ladislav Michl
PTXdist (and this project is worth checking out as well;
http://www.pengutronix.de/software/ptxdist/index_en.html). And without
looking at the crosstool-ng source too closely, it looks like ./configure and
menuconfig are used in the same way. So ./configure && make is used to
build the tool itself and menuconfig serves to configure toolchain options.
Doesn't that seem a reasonable enough solution?
[snip]
Post by Rob Landley
Ok, "./configure --prefix=`pwd`/tmpdir" and then "make install" into that...
and it just copied the patches directory it insisted I untar into its source
directory. Went into tmpdir, ran bin/ptxdist, and it spit out 68 lines
according to wc.
Ok, this is another package doing this, but that doesn't make it right. Why
on _earth_ would you need to install source code before you're allowed to
configure and compile it? The only thing it actually built when I did the
first "make" was the kconfig binaries. The kernel (where kconfig comes from)
does not require you to make and install kconfig before using menuconfig.
Neither do long-term users of it like busybox or uClibc...
You do not need to install PTXdist anywhere to start using it. The install
part is optional, just in case you want to distribute it as a binary tarball
to your colleagues or make (for example) a Debian package. However, I have to
admit that it is non-obvious. PTXdist's kconfig is hacked a bit to handle
dependencies, so if you want to express openssh's dependency on openssl you
do so in the Kconfig file only. The 'ptxdist' script could probably do that
as well, but unless you hack on PTXdist itself you are expected to run
./configure && make only once.
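The kind of Kconfig expression meant here looks roughly like this (a sketch;
the symbol names are illustrative, not PTXdist's actual rules):

```kconfig
# Illustrative Kconfig fragment: the openssh -> openssl relation is stated
# once here, and PTXdist's modified kconfig derives the build ordering from it.
config OPENSSH
	tristate "openssh - secure shell daemon and clients"
	select OPENSSL
```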
Post by Rob Landley
$ bin/ptxdist menuconfig
ptxdist: error: ptxconfig file is missing
ptxdist: error: please 'ptxdist clone' an existing project
$ bin/ptxdist clone
Spits out exactly the same help as if you run ptxdist without arguments, ok
clone <from> <to> create a new project, cloned from <from>.
No clue where the "from" options are. Nothing obvious in the "lib" or "bin"
directories it installed, and I already deleted the source it "installed"
from since it obviously wasn't _using_ it...
If I had to make this work, the next step would be to go to the website and
look for online documentation, or re-extract the source tarball and see what
that had, but I'm afraid I've run out of interest.
Well, that's still a work in progress...
Post by Rob Landley
Getting back to your original point, "building the tool itself" in this
context apparently means building the menuconfig binary, because bin/ptxdist
itself is a bash script. I have no idea what the first ./configure; make;
make install cycle is expected to accomplish. (Other than hardwiring in
absolute paths you can detect at runtime with the pwd command or perhaps by
some variant of readlink -f "$(which "$0")" if you want to be _more_ flexible
than the make command...)
There is nothing hardwired. ./configure checks prerequisites (and searches for
the curses library). Of course it would be nice to have as few prerequisites as
possible, but this is limited by the amount of human resources. Once upon a time
the idea was to let PTXdist build its prerequisites on its own as host tools,
but that has its own set of problems.
Post by Rob Landley
Will I someday have to compile and install makefiles before I can build the
package they describe? I do not understand what these extra steps
accomplish...
See above.

Anyway, this is starting to be off topic, so in case you want anything to be
improved (and you made a few valid points here), feel free to start another
thread called, for example, "Why PTXdist sucks" (such subjects tend to
attract attention) to keep this one from being polluted.

Best regards,
ladis

--
For unsubscribe information see http://sourceware.org/lists.html#faq
Rob Landley
2009-04-09 01:13:31 UTC
Permalink
Post by Ladislav Michl
Post by Rob Landley
Post by Ladislav Michl
PTXdist (and this project is worth checking out as well;
http://www.pengutronix.de/software/ptxdist/index_en.html). And without
looking at the crosstool-ng source too closely, it looks like ./configure
and menuconfig are used in the same way. So ./configure && make is used
to build the tool itself and menuconfig serves to configure toolchain
options. Doesn't that seem a reasonable enough solution?
[snip]
Post by Rob Landley
Ok, "./configure --prefix=`pwd`/tmpdir" and then "make install" into
that... and it just copied the patches directory it insisted I untar into
its source directory. Went into tmpdir, ran bin/ptxdist, and it spit out
68 lines according to wc.
Ok, this is another package doing this, but that doesn't make it right.
Why on _earth_ would you need to install source code before you're
allowed to configure and compile it? The only thing it actually built
when I did the first "make" was the kconfig binaries. The kernel (where
kconfig comes from) does not require you to make and install kconfig
before using menuconfig. Neither do long-term users of it like busybox or
uClibc...
You do not need to install PTXdist anywhere to start using it.
Uh-huh. Starting from a fresh tarball...

$ make menuconfig
make: *** No rule to make target `menuconfig'. Stop.

$ bin/ptxdist menuconfig
ptxdist: error: PTXdist in /home/landley/ptxdist-1.0.2 is not built.

$ make
make: *** No targets specified and no makefile found. Stop.

$ ./configure
checking for ptxdist patches... yes
checking for gcc... gcc
...
ptxdist version 1.0.2 configured.
Using '/usr/local' for installation prefix.

# Try crosstool-ng's way?

$ ./configure --local
configure: error: unrecognized option: --local
Try `./configure --help' for more information

It seems more accurate to say there might be a non-obvious workaround for the
need to install it before using it.
Post by Ladislav Michl
The install part is
optional, just in case you want to distribute it as a binary tarball to your
colleagues
Isn't what we downloaded already a tarball? The only "binary" part seems to
be the kconfig binaries...
Post by Ladislav Michl
or make (for example) a Debian package. However I have to admit
that it is non-obvious. PTXdist's kconfig is hacked a bit to handle
dependencies, so if you want to express openssh's dependency on openssl you
do so in the Kconfig file only.
Doesn't kconfig normally track and enforce dependencies? I thought that was
one of its main functions...
Post by Ladislav Michl
The 'ptxdist' script could probably do that as
well, but unless you hack on PTXdist itself you are expected to run
./configure && make only once.
You don't have to install the linux kernel source code before building a Linux
kernel, and the kbuild infrastructure is more complicated than some entire
embedded operating systems I've seen. The argument "but multiple people may
want to share a patched version to build for several different targets" seems
to apply just as much to the kernel as to a cross toolchain, yet they
never had a need for this extra step. (You _can_ extract the tarball
in /usr/src/linux and then build out of tree...)

I don't remember uClinux, openwrt, or buildroot expecting you to configure,
make, and install them before being allowed to run menuconfig to set up the
actual build they do. (Admittedly I haven't poked at any of them in the past
week, so maybe I'm forgetting something...) They do the same "download lots
of third party tarballs, compile them resolving dependencies between them,
and integrate the result into a coherent whole" task...

I'm just wondering where this "source code should be installed before you can
compile it, just extracting a tarball isn't good enough, you need to run
configure twice and make twice" meme came from. I still don't understand the
supposed advantage...
Post by Ladislav Michl
Post by Rob Landley
Getting back to your original point, "building the tool itself" in this
context apparently means building the menuconfig binary, because
bin/ptxdist itself is a bash script. I have no idea what the first
./configure; make; make install cycle is expected to accomplish. (Other
than hardwiring in absolute paths you can detect at runtime with the pwd
command or perhaps by some variant of readlink -f "$(which "$0")" if you
want to be _more_ flexible than the make command...)
There is nothing hardwired. ./configure checks prerequisites (and searches
for the curses library). Of course it would be nice to have as few
prerequisites as possible, but this is limited by the amount of human
resources. Once upon a time the idea was to let PTXdist build its
prerequisites on its own as host tools, but that has its own set of
problems.
Tell me about it. (Took me almost two years to make everything work right...)

I understand why crosstool-ng is installing prerequisites now. It's aimed at
performing almost a forensic task: reproducing the exact toolchain that was
used to build an existing binary root filesystem, which some vendor somewhere
had but didn't bother to ship, so you have to reverse engineer it. In that
context, the set of environmental dependencies your build requires really
_can't_ be contained, because you don't really know at ./configure time what
they'll _be_ and there are two many different upstream package versions to
police closely and try to clean up yourself.

This is a task I'm happy to leave to Yann. It's way too fiddly for me... :)
Post by Ladislav Michl
Post by Rob Landley
Will I someday have to compile and install makefiles before I can build
the package they describe? I do not understand what these extra steps
accomplish...
See above.
I read it, I just don't _get_ it.
Post by Ladislav Michl
Anyway, this is starting to be off topic, so in case you want anything to
be improved (and you made a few valid points here), feel free to start another
thread called, for example, "Why PTXdist sucks" (such subjects tend to
attract attention) to keep this one from being polluted.
Oh, I think all modern software sucks. My Monday blog entry was, just for
exercise, about why my _own_ build system sucks:

http://landley.net/notes.html#06-04-2009

(And that's by no means even close to a complete list, that was just off the
top of my head in the five minutes I was willing to spend typing about it.)

Keep in mind that software in general is still darn new and completely
primitive. The original IBM PC was our industry's "Model T", and that
analogy says we have yet to even invent the 3-point seat belt.

The Model T was 1927, the IBM PC was 1981, which means we're currently circa
1955 or so. Google for "1950's automobile pictures" some time, then tell me
that everything we've currently got hasn't got _huge_ room for
improvement. :)

I'm trying to ask the "Do we really _need_ fins? What about gas mileage? Is
lead really _necessary_ in gasoline?" type questions. Even if everything
(including the stuff I've written) currently gets this sort of thing wrong,
it should still be possible to do _better_...

Rob
--
GPLv3 is to GPLv2 what Attack of the Clones is to The Empire Strikes Back.

--
For unsubscribe information see http://sourceware.org/lists.html#faq
Ladislav Michl
2009-04-10 09:09:42 UTC
Permalink
Post by Rob Landley
Post by Ladislav Michl
You do not need install PTXdist anywhere to start using it.
Uh-huh. Starting from a fresh tarball...
I haven't done that in the last few years, so perhaps there is something wrong
with the tarballs...
Post by Rob Landley
$ make menuconfig
make: *** No rule to make target `menuconfig'. Stop.
$ bin/ptxdist menuconfig
ptxdist: error: PTXdist in /home/landley/ptxdist-1.0.2 is not built.
~/src/ptxdist-trunk$ grep AC_INIT configure.ac
AC_INIT([ptxdist],[1.99.svn],[***@pengutronix.de])

...ah, here it is; it seems the version differs, and indeed there are some newer
tarballs: http://www.pengutronix.de/software/ptxdist/download/v1.99/
Note that 1.0.2 is a maintenance release of two-year-old stuff...
Post by Rob Landley
$ make
make: *** No targets specified and no makefile found. Stop.
$ ./configure
checking for ptxdist patches... yes
checking for gcc... gcc
...
ptxdist version 1.0.2 configured.
Using '/usr/local' for installation prefix.
# Try crosstool-ng's way?
$ ./configure --local
configure: error: unrecognized option: --local
Try `./configure --help' for more information
It seems more accurate to say there might be a non-obvious workaround for the
need to install it before using it.
~/src/ptxdist-trunk$ ./configure && make
~/src/ptxdist-trunk$ cd ../ptx-test/
~/src/ptx-test$ ../ptxdist-trunk/bin/ptxdist menuconfig

Now it will bitch at you again about selected platform config etc...
So those two years do not represent any progress for the point you are
trying to make. Shame ;-)
Post by Rob Landley
Post by Ladislav Michl
Install part is
optional just in case you want to distribute it as binary tarball to your
colleagues
Isn't what we downloaded already a tarball? The only "binary" part seems to
be the kconfig binaries...
Post by Ladislav Michl
or make (for example) debian package. However I have to admit
that it is non obvious. PTXdist's kconfig is hacked a bit to handle
dependencies, so if you want to express openssh dependency on openssl you
do so in Kconfig file only.
Doesn't kconfig normally track and enforce dependencies? I thought that was
one of its main functions...
...and still is; I just didn't express it clearly. If feature FOO depends on
BAR, you compile both and link them together, no matter which one gets compiled
first. But when program FOO depends on library BAR, you need to compile that
library first to let FOO link against it, which is something 'plain' kconfig
does not handle.
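That generation step can be sketched like so (a toy illustration, not
PTXdist's real generator): scan Kconfig-style "depends on" lines and emit
make prerequisites, so a library's install rule runs before its dependents
compile.

```shell
# Turn kconfig dependency declarations into make build-ordering rules.
# Reading stdin, "config FOO" followed by "depends on BAR" becomes
# "foo.compile: bar.install".
gen_deps() {
    awk '
        /^config /    { pkg = tolower($2) }
        /depends on / { for (i = 3; i <= NF; i++)
                            printf "%s.compile: %s.install\n", pkg, tolower($i) }
    '
}

printf 'config OPENSSH\n\tdepends on OPENSSL\n' | gen_deps
# -> openssh.compile: openssl.install
```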
Post by Rob Landley
Post by Ladislav Michl
The 'ptxdist' script could probably do that as
well, but unless you hack on PTXdist itself you are expected to run
./configure && make only once.
[snip]
Post by Rob Landley
I'm just wondering where this "source code should be installed before you can
compile it, just extracting a tarball isn't good enough, you need to run
configure twice and make twice" meme came from. I still don't understand the
supposed advantage...
I'm not going to advocate this design decision (as I can live with it). From
the developers' point of view, it ensures the user runs the expected
environment. Try to look at it from this perspective: you have a configure
script, and it is there to check your environment; once it is already there,
as an ordinary user you would expect it to also produce some makefile, and
that makefile to do something. And it really does. Could it be done a
different way? Yes it could; it's just that this one is an easy way to
implement the goal: check the user's environment. And it does exactly that.
Compiling mconf is there just to save one probably irrelevant step when
running the ptxdist script. Once the prerequisites list gets short enough to
contain only stuff everyone is supposed to have, it will probably turn out to
be easier to throw the whole configure thing away, do it the 'kernel way',
and live happily ever after. Um, didn't I say I'm not going to advocate it?
(My point of view may differ from that of the main PTXdist developers.)

[snip]
Post by Rob Landley
Post by Ladislav Michl
Anyway, this is starting to be off topic, so in case you want anything to
be improved (and you did few valid points here), fell free to start another
thread called for example "Why PTXdist sucks" (such a subjects tend to
attract attention) to prevent this one from pollution.
Oh I think all modern software sucks. My monday blog entry was, just for
http://landley.net/notes.html#06-04-2009
(And that's by no means even close to a complete list, that was just off the
top of my head in the five minutes I was willing to spend typing about it.)
[snip]
Post by Rob Landley
I'm trying to ask the "Do we really _need_ fins? What about gas mileage? Is
lead really _necessary_ in gasoline?" type questions. Even if everything
(including the stuff I've written) currently gets this sort of thing wrong,
it should still be possible to do _better_...
I'm wondering why you (computer guys) are always trying to compare software
development to the automotive industry without having any deeper clue about it.
Pretty please leave out those analogies, as they are not completely analogous
and do not serve any real purpose.

Thank you,
ladis

--
For unsubscribe information see http://sourceware.org/lists.html#faq
Rob Landley
2009-04-10 23:41:12 UTC
Permalink
Post by Ladislav Michl
Post by Rob Landley
Post by Ladislav Michl
You do not need to install PTXdist anywhere to start using it.
Uh-huh. Starting from a fresh tarball...
I haven't done that in the last few years, so perhaps there is something wrong
with the tarballs...
I do that every release, and I try to make quarterly releases because this
guy's talk convinced me it was a good thing:

http://video.google.com/videoplay?docid=-5503858974016723264
Post by Ladislav Michl
Post by Rob Landley
$ make menuconfig
make: *** No rule to make target `menuconfig'. Stop.
$ bin/ptxdist menuconfig
ptxdist: error: PTXdist in /home/landley/ptxdist-1.0.2 is not built.
~/src/ptxdist-trunk$ grep AC_INIT configure.ac
...ah, here it is; it seems the version differs, and indeed there are some newer
tarballs: http://www.pengutronix.de/software/ptxdist/download/v1.99/
Note that 1.0.2 is a maintenance release of two-year-old stuff...
Yeah, except the website called 1.0.2 the stable release and 1.99 the
development branch. (I tend to think of "development branch" as "today's
source control snapshot". I might play with it _after_ I get stable
working.)
Post by Ladislav Michl
Post by Rob Landley
It seems more accurate to say there might be a non-obvious workaround for
the need to install it before using it.
~/src/ptxdist-trunk$ ./configure && make
~/src/ptxdist-trunk$ cd ../ptx-test/
~/src/ptx-test$ ../ptxdist-trunk/bin/ptxdist menuconfig
Now it will bitch at you again about selected platform config etc...
So those two years do not represent any progress for the point you are
trying to make. Shame ;-)
Regression testing your newbie intro paths is something you have to do on a
regular basis. (They bit-rot, because the _developers_ never need them...)
Post by Ladislav Michl
Post by Rob Landley
Post by Ladislav Michl
The install part is
optional, just in case you want to distribute it as a binary tarball to
your colleagues
Isn't what we downloaded already a tarball? The only "binary" part seems
to be the kconfig binaries...
Post by Ladislav Michl
or make (for example) a Debian package. However I have to admit
that it is non-obvious. PTXdist's kconfig is hacked a bit to handle
dependencies, so if you want to express openssh's dependency on openssl
you do so in the Kconfig file only.
Doesn't kconfig normally track and enforce dependencies? I thought that
was one of its main functions...
...and still is; I just didn't express it clearly. If feature FOO depends on
BAR, you compile both and link them together, no matter which one gets
compiled first. But when program FOO depends on library BAR, you need to
compile that library first to let FOO link against it, which is something
'plain' kconfig does not handle.
So you're generating makefile dependencies from the kconfig dependencies...?
Post by Ladislav Michl
Post by Rob Landley
I'm trying to ask the "Do we really _need_ fins? What about gas mileage?
Is lead really _necessary_ in gasoline?" type questions. Even if
everything (including the stuff I've written) currently gets this sort of
thing wrong, it should still be possible to do _better_...
I'm wondering why you (computer guys) are always trying to compare software
development to the automotive industry without having any deeper clue
about it.
I wouldn't call myself an automotive _expert_, but why do you assume I haven't
got _any_ deeper clue about the industry?

Would you like me to explain how an internal combustion engine works? Put you
in touch with my friends in Michigan who are unemployed due to the layoffs at
Ford and General Motors? Give the whole spiel about how electric golf carts
had the potential to be a standard disruptive technology naturally
supplementing and then replacing gasoline powered vehicles but were precluded
from going outside apartment complexes and such due to regulatory barriers
restricting what could become "street legal" (and yes I'm aware of the power
to weight ratio advantage inherent in fuels that use atmospheric oxygen as
half their reaction mass, and yet laptops are the main commercial driver of
battery technology today and they care at least as much about weight to power
ratio). How biodiesel sounds like a great idea until you realize that if all
our arable land switched over to biodiesel production and we _stopped_ _eating_
we still wouldn't have replaced all the diesel we currently use? Although at
least it's a net producer of energy, as opposed to corn ethanol which is just
a lossy storage mechanism nobody would take seriously if it wasn't subsidized
by _stupid_ federal programs.

Should I go into the parallels between the 1970's and 2000's for the american
auto industry, and why SUVs were the second coming of the fin-covered gas
guzzlers of the earlier generation and the _continuing_ refusal to focus on
fuel efficient vehicles (yes lower margin but the demand isn't nearly as
elastic, and you _knew_ peak production of oil was coming from the drilling
records back in the 80's, yes the oil companies lied about their reserves and
quietly restated them in the wake of Enron/Worldcom/tyco/global crossing etc
but that was 2001, you had seven years warning to retool. If you think the
EV1 was mishandled, why the heck did saturn start making SUVs when the POINT
of saturn was to compete with Japanese and German sedans and subcompacts?
Japan's lost decade should have been a _gift_, not an excuse to do an upward
retreat _away_ from things like the Mini Cooper and the relaunched Volkswagen
Feature (it's not a bug, I don't care what they say). What did they do in
the past 10 years that they're _proud_ of, an HV3 you can't actually take
off-road without doing 10s of thousands of dollars of damage to the thing?
Go ahead, put chrome and fins on it already!)

Anyway, all that's _deeply_ off topic for this list. Email me if you're bored
enough to want to go into it.
Post by Ladislav Michl
Pretty please leave away those analogies as they are not
completly analogic and do not serve any real purpose.
Automobiles are the most complicated piece of reasonably mature technology
that ordinary individuals use on a daily basis. You need training and
certification to operate them, they have extensive maintenance requirements
their users need to be aware of (gas, oil, brake pads), but most people
aren't mechanics and aren't expected to be, even though system failures can
maim or kill. Despite that, we take them for granted, expect everybody to
learn to use them as a teenager, and lots of families have two.

If you want to know how people will think about computers once they've been
around for 100 years, the automobile is an obvious model because very little
else has been _around_ for 100 years. (The light bulb and the rotary dial
telephone started about the same era, but neither were in the same complexity
category. We're not expected to _operate_ either in a nontrivial way.)

Rob
--
GPLv3 is to GPLv2 what Attack of the Clones is to The Empire Strikes Back.

--
For unsubscribe information see http://sourceware.org/lists.html#faq
Ladislav Michl
2009-04-13 22:37:25 UTC
Permalink
Post by Rob Landley
Post by Ladislav Michl
...and still is; I just didn't express it clearly. If feature FOO depends on
BAR, you compile both and link them together, no matter which one gets
compiled first. But when program FOO depends on library BAR, you need to
compile that library first to let FOO link against it, which is something
'plain' kconfig does not handle.
So you're generating makefile dependencies from the kconfig dependencies...?
Yes.
Post by Rob Landley
Post by Ladislav Michl
Post by Rob Landley
I'm trying to ask the "Do we really _need_ fins? What about gas mileage?
Is lead really _necessary_ in gasoline?" type questions. Even if
everything (including the stuff I've written) currently gets this sort of
thing wrong, it should still be possible to do _better_...
I'm wondering why you (computer guys) are always trying to compare software
development to the automotive industry without having any deeper clue
about it.
I wouldn't call myself an automotive _expert_, but why do you assume I haven't
got _any_ deeper clue about the industry?
Would you like me to explain how an internal combustion engine works?
No, of course not. I do expect many people on this list to be able to design a
computer program, but only a few, if any, to be able to design an internal
combustion engine. Sure, people have _some_ deeper clue about the automotive
industry, but I would not assume it is comparable with their computer science
experience. And note that to make your point such a comparison was not even
needed, so I would use Occam's razor here and simply drop it.
Post by Rob Landley
in touch with my friends in Michigan who are unemployed due to the layoffs at
Ford and General Motors?
And this is a plain call for a flame war. These companies deserve to die, as
their products lack any invention at all. I used them, hated them, and would
have to lose the rest of my intellect to even think about acquiring such crap.
I hope the computer industry can do better, and yes, bad examples educate
better. So, let's go back to software design, as there we can actually _do_
something better for the tools many programmers work with.
Post by Rob Landley
Post by Ladislav Michl
Pretty please leave away those analogies as they are not
completly analogic and do not serve any real purpose.
Automobiles are the most complicated piece of reasonably mature technology
that ordinary individuals use on a daily basis. You need training and
certification to operate them, they have extensive maintenance requirements
their users need to be aware of (gas, oil, brake pads), but most people
aren't mechanics and aren't expected to be, even though system failures can
maim or kill. Despite that, we take them for granted, expect everybody to
learn to use them as a teenager, and lots of families have two.
If you want to know how people will think about computers once they've been
around for 100 years, the automobile is an obvious model because very little
else has been _around_ for 100 years. (The light bulb and the rotary dial
telephone started about the same era, but neither were in the same complexity
category. We're not expected to _operate_ either in a nontrivial way.)
Well, that's all true, except this is a developers' mailing list, and as such
it is not targeted at users. You might argue that design principles should be
the same whether targeting users or developers, but I see no point in making
that argument to people (software developers) who do the same job for software
development as, for example, the people designing shock absorbers do for the
automotive industry. The products of both have no use on their own and need
other engineers to make them part of something useful.

And if you really need something to compare, what about the washing machine? It
is simple enough that people can understand its internals easily, it has been
around long enough, it is reliable, and it helped to change society, as women
suddenly got a lot of time to invest outside housekeeping ;-)

We are offtopic, indeed, so let's stop it. As I'm a happy PTXdist user and I
think its design is in general good, I want others to give it a try and write
down their suggestions. And you did it pretty well, Rob.

Thank you,
ladis (who will, once time permits, work on cutting down
the prerequisites list and eliminating ./configure && make)

--
For unsubscribe information see http://sourceware.org/lists.html#faq
Ladislav Michl
2009-04-14 11:08:19 UTC
Permalink
Post by Rob Landley
Automobiles are the most complicated piece of reasonably mature technology
that ordinary individuals use on a daily basis. You need training and
certification to operate them, they have extensive maintenance requirements
their users need to be aware of (gas, oil, brake pads), but most people
aren't mechanics and aren't expected to be, even though system failures can
maim or kill. Despite that, we take them for granted, expect everybody to
learn to use them as a teenager, and lots of families have two.
If you want to know how people will think about computers once they've been
around for 100 years, the automobile is an obvious model because very little
else has been _around_ for 100 years. (The light bulb and the rotary dial
telephone started about the same era, but neither were in the same complexity
category. We're not expected to _operate_ either in a nontrivial way.)
Ah... Completely forgot to mention probably the key issue here (and then I'll
try to keep silent on this offtopic ;-)). An automobile is a device with the sole
purpose of moving you from place to place. It has done so from the very beginning
and it does it the very same way. Over those one hundred years nothing really
changed. Yes, driving comfort and speed differ, but the automobile as such
still does the same - it moves you.

The computer as a device changed a lot during less than half of that time (not
counting mechanical computers). A MEDA 42 TA [1] is programmed in a different way
than an IQ 151 [2], and each is better for a different sort of task.

Now once you look at a modern computer, it can be told to solve very different
tasks, even those which were hardly imaginable in the late 60's. And it is its
variability which makes the comparison of computer vs. automobile pretty useless.
Sure, you can compare anything to anything, but is that really worth doing?
Look at DVD players: most of them do their job pretty well, are very easy to
operate (even my grandparents can handle that, while they are clueless sitting
in front of a keyboard), and yet no one compares a DVD player to an automobile
although the hardware inside is a decent computer. And this is the whole point.
What is comparing a computer (doing nothing and everything) to the automobile
(doing still the same) good for?

Sorry to say, I do not see analogy there even from user perspective.

ladis

[1] Loading Image...
[2] http://cs.wikipedia.org/wiki/IQ_151

Rob Landley
2009-04-05 22:09:45 UTC
Permalink
Post by Rob Landley
http://landley.net/notes-2008.html#07-03-2009
Meaning http://landley.net/notes-2009.html#07-03-2009 I presume!
:)
M
Yes. (I'm still writing 2008 on my checks, too.)

Thanks,

Rob
--
GPLv3 is to GPLv2 what Attack of the Clones is to The Empire Strikes Back.

Yann E. MORIN
2009-04-06 20:11:03 UTC
Permalink
Post by Rob Landley
Post by Yann E. MORIN
This post is to present the overall design of crosstool-NG, how and why
I came up with it, and to eventually serve as a base for an open discussion
on the matter.
For comparison, my Firmware Linux project also makes cross compilers,
So, it boils down to comparing _your_ project with _mine_.
I am all for comparing, but I'm afraid we do not have the same goals.

You've argued against cross-compiling more often than not, and you did make
good points. And, from what I understand, FWL is your proof of concept
that cross-compiling can be avoided.

And you proved it. Partially. But that's not the point.

Just my turn to rant a little bit ;-) For many, cross-compiling can't be
avoided. Running under an emulator is biased. Your ./configure might
detect some specifics of the machine it's running on (the emulator) that
might prove wrong on the real hardware, or, the other way around, miss
some specifics of the real hardware that the emulator does not provide.

So, we're stuck. Or are we?

( Just a side note before I go on. I do not use the same meanings as you
do for the following words:
build (machine) : the machine that _builds_ stuff
host (machine) : the machine that _runs_ stuff
target (machine): the machine the stuff _generates_ code for

For a compiler (binutils, gcc, gdb), the three make sense, but for other
programs (glibc, bash, toybox...), only the first two do.
)

One of the problems I can see with FWL is how you end up with a firmware
image that runs on the target platform (say my WRT54GL ;-) ), and contains
only what is needed to run it, without all that native build environment
that will definitely not fit in the ~7MiB available in there.

My point of view is that:
1- you need correct build tools, including a correctly isolated toolchain
2- you build your packages and install them in a rootfs/ directory
(rootfs/ will contain only your packages' installed files, and is missing
the toolchain libs)
3- you use a copy of rootfs/ which you populate with libs from the toolchain
4- you use that populated copy to build your firmware images
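The four steps above can be sketched as a toy shell script. Every path in this sketch (toolchain/, rootfs/, staging/, firmware.tar) is an illustrative stand-in, not crosstool-NG's or FWL's actual layout:

```shell
#!/bin/sh
set -e

# 1- correct build tools: a stand-in for an isolated toolchain sysroot
mkdir -p toolchain/sysroot/lib
echo 'stand-in for libc' > toolchain/sysroot/lib/libc.so.6

# 2- packages installed into rootfs/ (no toolchain libs in here)
mkdir -p rootfs/bin
echo 'stand-in for an application' > rootfs/bin/app

# 3- copy rootfs/ and populate the copy with the toolchain's libs;
#    rootfs/ itself stays free of toolchain files
cp -a rootfs staging
mkdir -p staging/lib
cp -a toolchain/sysroot/lib/. staging/lib/

# 4- build the firmware image from the populated copy
tar -C staging -cf firmware.tar .
```

The point of the staging copy is that the pristine rootfs/ keeps only the packages' own files, so the image can be regenerated against a different toolchain without rebuilding the packages.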

Of course, if your packages are not cross-compile friendly, you may have
problems. But nowadays, most common packages do cross-compile neatly. I
have seen only a few of them requiring carefully crafted ./configure options
or a few patches here and there (ltrace is such a sucker).

For the record, I have some experience in that field as well ;-), as I've
been doing exactly this stuff for the past four years as my day-time job,
and I've played with LFS and cross-LFS for the previous three years or so.

Note: crosstool-NG was *not* written at my day-time job, but in my own
spare time (which caused some friction here at home from time to time...
Post by Rob Landley
so I
have some experience here too.
And, I do acknowledge your experience and your technical merit.
You know it, let's others know it as well. :-)
Post by Rob Landley
My project is carefully designed in layers so you don't have to use the cross
compilers I build. It should be easy to use crosstool-ng output to build the
root filesystems and system images that the later scripts produce. (How easy
it actually is, and whether there's any benefit in doing so, is something I
haven't really looked into yet.) The point is the two projects are not
actually directly competing, or at least I don't think they are.
The main goals differ, but the underlying reason is the same: be able to
build stuff that will run on an alien machine. crosstool-NG is limited to
building the required tools (actual compiler, plus a few debug utilities),
while FWL aims at building a native build environment.
Post by Rob Landley
I came at it from a different background. I was playing with Linux From
Scratch almost from the beginning,
LFSer as well in the 2001-2004 era. Went as far as using it as my daily
workstation using KDE. Yeah, I was *that* insane at the time. But it taught
me a whole lot in the end.

[--SNIP the genesis of FWL--]
Post by Rob Landley
I looked at crosstool circa 2004-ish, but was turned off by the way it
replicated huge amounts of infrastructure for every single dot release of
every component. (I remember it having separate patches, separate build
scripts, and so on. I don't even remember what it did per-target.)
How can you avoid having one patchset for each version of each component?
Of course, FWL uses only the latest versions available (which is wrong, it
still uses gcc-4.1.2 for philosophical reasons) of a small set of packages
(namely: binutils, gcc, uClibc, linux for the toolchain; I don't care about
the other tools, which do not belong to the toolchain)
Post by Rob Landley
I wanted something generic. A single set of source code, a single set of
build scripts, with all the variations between them kept as small, simple,
and contained as possible.
Correct here.
Post by Rob Landley
I actually ended up not basing anything off of crosstool. Instead when Cross
Linux From Scratch came out, I learned from that, and by asking a lot of
questions of coworkers at an embedded company I worked at for a year
(Timesys). But this was 2006, so it was after you'd already started with
this.
Which is not a reason why I should not review what I've done today, any more
than it is a reason that I should.
Post by Rob Landley
http://landley.net/writing/docs/cross-compiling.html
But, of all the packages I've been using, most are *now* cross-compile friendly
(with some notable exceptions), and the ones that gave me the most headaches
were the ones coming from companies who don't grok the terms "open" and "free
as in speech". *Those* were real suckers.
Post by Rob Landley
Post by Yann E. MORIN
a- ease overall maintenance
b- ease configuration of the toolchain
c- support newer versions of components
d- add new features
e- add alternatives where it was available
Can't really argue with those goals, although mostly because they're a bit
vague.
What do you mean, "vague" (I understand the word, it's the same in french)?
The question is really: what in the above list qualifies as "vague"?
Post by Rob Landley
My current build system has very careful boundaries.
Call it an API?
Post by Rob Landley
This is not only because my first couple systems grew out of control
(adding more and more packages and more and more features), but because I
watched Erik's buildroot explode from a test harness for uClibc into an
accidental Linux distribution.
[--SNIP--]

Don't speak buildroot with me. Please, just don't. ;-)
I totally agree with you on the subject, let's close it.
Post by Rob Landley
This is why my current system is very carefully delineated. I know exactly
what it does NOT do. It builds the smallest possible system capable of
rebuilding itself under itself. I.E. it bootstraps a generic development
environment for a target, within which you can build natively. It has to do
some cross compiling to do this, but once it's done you can _stop_ cross
compiling, and instead fire up qemu and build natively within that.
Except that it in fact does cross-compiling, as it escapes qemu
via distcc to call the cross tools on the build machine. :-/

[--SNIP--]
Post by Rob Landley
Post by Yann E. MORIN
So that was how crosstool-NG was born to the world...
Yup. And from that set of assumptions you've done a fairly good job.
Accepted, thank you! ;-)
Post by Rob Landley
What
I'm mostly disagreeing with is your assumptions.
There are two things:
- the goal
- the assumptions made to reach that goal

Both make the "why". What I came up with makes the "how".

As for the goal, I wanted to be able to build dependable (cross-)toolchains.
On the assumptions, I saw that I could not rely on binutils/gcc/glibc/... to
build easily (which is indeed the case), and that I needed a kind of framework
to make them build seamlessly.

Now we can discuss this, but the original discussion was not to see if the
"how" was the best way to answer the "why".
Post by Rob Landley
In my case, I separated my design into layers, the four most interesting of which are:
download.sh - download all the source code and confirm sha1sums
cross-compiler.sh - create a cross compiler for a target.
crosstool-NG stops here. And it strives to offer more options than your
solution.

A great many people are stuck with a specific version of one or more
of the components (gcc/glibc/...) for historical reasons I am not ready
to discuss. Having a tool that cannot cope with earlier versions is
not helpful.
Post by Rob Landley
mini-native.sh - build a root filesystem containing a native toolchain
system-image.sh - package the root filesystem into something qemu can boot
Those two _use_ the above; they are not part of it.
Post by Rob Landley
Post by Yann E. MORIN
The first step was to split up this script into smaller ones, each
dedicated to building a single component. This way, I hoped that it would
be easier to maintain each build procedure on its own.
I wound up breaking the http://landley.net/code/firmware/old version into a
dozen or so different scripts. My earlier versions the granularity was too
coarse, in that one the granularity got too fine. I think my current one has
the granularity about right; each script does something interesting and
explainable.
So is each script in scripts/build/: each is dedicated to building a
single piece of the toolchain, and each can be replaced without the others
noticing (or so it should be the case).
Post by Rob Landley
Notice there is _nothing_ target-specific in there. All the target
information is factored out into sources/targets. The build scripts
_do_not_care_ what target you're building for.
That's the same in crosstool-NG: the configuration and the wrapper scripts
set up some variables that the build scripts rely upon to build their stuff.

The target-specific configuration is generic, but can be overridden by
target-specific code (eg. ARM can override the target tuple to append
"eabi" to the tuple if EABI is enabled, and to not add it if not enabled;
this can *not* be done in a generic way, as not all architectures
behave the same in this respect).
Post by Rob Landley
1) Why do we have to install your source code? The tarball we download from
your website _is_ source code, isn't it? We already chose where to extract
it. The normal order of operations is "./configure; make; make install".
With your stuff, you have to install it in a second location before you can
configure it. Why? What is this step for?
I don't understand. crosstool-NG is like any other package:
- it requires some pre-existing stuff in your environment, hence the
"./configure" step
- you have to build it, hence the "make" step (although this is only
sed-ing a few place-holders here and there)
- you have to install it to use it, hence the "make install" step
- you add its location/bin to the PATH

Then you can run ct-ng from anywhere and build your toolchain.
I may repeat myself, but do you expect to build your own program in the
gcc source tree?
Post by Rob Landley
2) Your configuration menu is way too granular. You ask your users whether or
not to use the gcc "-pipe" flag. What difference does it make? Why ask
this? Is there a real benefit to bothering them with this, rather than just
picking one?
I will answer this in an answer to your other post, if you will.
Post by Rob Landley
I want to do a more detailed critique here, but I had to reinstall my laptop a
couple weeks ago and my quick attempt to bring up your menuconfig only made it this far:
./configure --prefix=/home/landley/cisco/crosstool-ng-1.3.2/walrus
Computing version string... 1.3.2
Checking for '/bin/bash'... /bin/bash
Checking for 'make'... /usr/bin/make
Checking for 'gcc'... /usr/bin/gcc
Checking for 'gawk'... not found
Bailing out...
I note that Ubuntu defaults to having "awk" installed; why you _need_ the GNU
version specifically is something I don't understand.
I could not make my awk script work with mawk, which is the default under
the obscure distribution I am using (Debian, I think). So I fall back to
requiring gawk. But that was an *enormous* error. Its main use is to try
to build a correct tsocks setup given the options. *That* is purely insane.
It should be going away. No, it /should/ not be going away. It *is* going
away.

The fact that I shoehorned proxy settings into crosstool-NG is an error,
granted; but because I'm using GNU extensions in there, I must check
for GNU awk.
Post by Rob Landley
For example, you require libtool. Why are you checking for libtool?
crosstool-NG itself does not require libtool. The components that it builds
will use it if they find it. But if the version is too old, the build will
break (I think it was mpfr at fault there, but I'm not sure), instead of
simply ignoring it.

So I have also to ensure a correct environment for the components I build.
Post by Rob Landley
I note
that libtool exists to make non-elf systems work like ELF, I.E. it's a NOP on
Linux, so it's actually _better_ not to have it installed at all because
libtool often screws up cross compiling.
But what if it *is* already installed? The end-user is free to install
whatever he/she wants on his/her computer, no? It's just that I want to
check that the environment is sane before going on any further.

The fact that libtool sucks is totally irrelevant to the problem.
And yes, it sucks.
Post by Rob Landley
(In my experience, when a project
is designed to do nothing and _fails_ to successfully do it, there's a fairly
high chance it was written by the FSF. One of the things my host-tools.sh
does is make sure libtool is _not_ in the $PATH, even when it's installed on
the host.
Oh, come on... ;-) My libtool is in /usr/bin. Do you want to remove /usr/bin
from my PATH? You'll end-up missing a lot of stuff, in this case.

OK, so now moving on to answer your other post(s)... Took me about two hours
trying to answer this one... :-(

Regards,
Yann E. MORIN.
--
.-----------------.--------------------.------------------.--------------------.
| Yann E. MORIN | Real-Time Embedded | /"\ ASCII RIBBON | Erics' conspiracy: |
| +0/33 662376056 | Software Designer | \ / CAMPAIGN | ___ |
| --==< ^_^ >==-- `------------.-------: X AGAINST | \e/ There is no |
| http://ymorin.is-a-geek.org/ | _/*\_ | / \ HTML MAIL | v conspiracy. |
`------------------------------^-------^------------------^--------------------'


Rob Landley
2009-04-07 07:07:37 UTC
Permalink
Post by Yann E. MORIN
Post by Rob Landley
Post by Yann E. MORIN
This post is to present the overall design of crosstool-NG, how and why
I came up with it, and to eventually serve as a base for an open
discussion on the matter.
For comparison, my Firmware Linux project also makes cross compilers,
So, it boils down to comparing _your_ project with _mine_.
I am all for comparing, but I'm afraid we do not have the same goals.
I'm using my project as an example of how I would have designed a system to
solve similar problems. It's probably a bad habit, but it's easy to fall
back on a context in which I've already worked through these issues to my own
satisfaction.
Post by Yann E. MORIN
You've argued against cross-compiling more often than not, and you did make
good points. And, from what I understand, FWL is your proof of
concept that cross-compiling can be avoided.
And you proved it. Partially. But that's not the point.
That's not even the point I was trying to make here.

My cross-compiler.sh script builds reusable cross compilers. It's wrapped to
be relocatable (so you can extract the prebuilt binary into an arbitrary
location, such as your user home directory, and use it from there), and it's
tarred up and uploaded as a set of prebuilt binary tarballs which are
compiled for 32-bit x86 and statically linked against uClibc, which is about
as portable as I can make them:

http://impactlinux.com/fwl/downloads/binaries/cross-compiler/host-i686/

(That should run on 32 bit hosts, and on 64 bit hosts without even the 32-bit
support libraries installed. It does require a 2.6 kernel, though.)

I wouldn't have bothered to wrap, package, and upload them if I didn't expect
people to want to use those outside the context of building mini-native and
system-image files.

Part of what I'm trying to point out is that the script that creates those
cross compilers is 150 lines of bash. (There's supporting code which
provides functions for conceptually simple and irrelevant things such as
downloading and extracting source tarballs, rm -rf'ing them afterwards, checking
for errors, creating a tarball of the resulting binaries, and so on. But I
tried very hard to make sure you don't have to read any of the supporting
code to understand and modify what the script is doing.)

If someone wants to do cross compiling, and prefers to build their cross
compiler from source instead of downloading prebuilt binary toolchains, the
only configuration decision that user is required to make is "what target do
you want". Every other configuration knob is optional and they're not
presented with them unless they go looking for them.

That's as simple as I know how to make it, both in implementation and to set
up and use it.

The "build natively under emulation" thing is a complete red herring in the
comparison I'm trying to make. I get easily distracted into talking about it
because I've spent so much time working on it, but it's not what I'm _trying_
to talk about here.
Post by Yann E. MORIN
Just my turn to rant a little bit ;-)
Go for it!
Post by Yann E. MORIN
For many, cross-compiling can't be
avoided. Running under an emulator is biased. Your ./configure might
detect some specifics of the machine it's running on (the emulator) that
might prove wrong on the real hardware, or, the other way around, miss
some specifics of the real hardware that the emulator does not provide.
This is back on the "to cross compile or not" tangent, but since you asked:

That's 99% a kernel issue. Your ./configure primarily probes for userspace
packages you have installed (what library APIs can I call and link against),
and those don't really depend on hardware. That's just what's installed in
your root filesystem.

You could just as easily say "I can't build this package on a standard PC
server because this package is designed to work on a laptop with a battery it
can monitor, and an accelerometer and a 3D card and wireless internet, none
of which this server has". If that sort of thing stopped you from _building_
it, distros like Red Hat would have trouble supporting laptops.

When building natively the base architecture should match, but the associated
peripherals don't matter much during the build. (You can't test the result
without that hardware, but you should be able to compile and even install
it.)

If you have counter-examples, I'm all ears.

That said, I'll grant there are times cross compiling can't be avoided. You
need to cross compile to reproducibly _get_ a native environment starting
from an arbitrary host. You may not have an emulator or powerful enough
native hardware to build natively (the Xilinx MicroBlaze comes to mind).
Setting up a native build environment may be overkill to build something like
a statically linked version of the linux
kernel's "Documentation/networking/ifenslave.c" that you're adding to an
existing filesystem supplied by the device manufacturer. (Or you just may
have many years of experience doing it and prefer that approach because it's
what you're comfortable with. :)
Post by Yann E. MORIN
So, we're stuck. Or are we?
( Just a side note before I go on. I do not use the same meanings as you
build (machine) : the machine that _builds_ stuff
host (machine) : the machine that _runs_ stuff
target (machine): the machine the stuff _generates_ code for
Um, I'm confused: are these the meanings I use for these words, the meanings
you use for these words, or the meaning the GCC manual ascribes to these
words?
Post by Yann E. MORIN
For a compiler (binutils, gcc, gdb), the three make sense, but for other
programs (glibc, bash, toybox...), only the first two do.
)
Tangent: not an issue specific to crosstool-ng. :)

Actually even for a compiler like tinycc there are only two interesting ones.
The fact that gcc did three of them is because gcc is more complicated than
it actually needs to be, because it thinks it must rebuild itself under
itself.

Note that at compile time, the compiler tells your C code what host you're
building for via predefined macros like __arm__ and __i386__. (I.E. the
old "$ARCH-gcc -dM -E - < /dev/null" trick.) Your compiler already knows
what host you're building for, and you can #include <endian.h> to query that
and even do conditional #includes of different .c files if you need different
code for different hosts. (The more complicated little endianness-detection
dance busybox does in include/platform.h can detect endianness for BSD and
MacOS X and even Digital Unix. During the compile, in a header file.)

So you should _never_ need to specify --build, because it can autodetect it.
It just doesn't.

More generally, most portable programs shouldn't care all that much about the
host they're running on, and that includes compilers. This is true for the
same reason that if the host processor type affects the output when you run a
gif to jpeg converter, something is wrong. If a program that converts known
input files into known output files doesn't perform exactly the same function
when it's running on arm as when it's running on x86, then it's _broken_.
Compilers are fundamentally just programs that convert input files (C code)
into output files (.o, .a, .so, executable, etc). (Yeah, they suck in
implicit input like libraries and header files from various search paths, but
a docbook->pdf converter sucks in stylesheets and fonts in addition to the
explicit xml input, and nobody thinks there's anything MAGIC about it.) The
fact those output files may (or may not) be runnable on the current system is
irrelevant. Things like sed and awk can produce shell scripts as their output,
which are runnable on the current system if you set the executable bit. It's
not _special_.

The distinction between "--host" and "--target" exists because gcc wants to
hardwire into its build system the ability to do a canadian cross. You could
just as easily do this by hand: first build a cross compiler on your current
machine targeting "--host", and then cross compile with that to build a
new compiler, this time configured to target "--target".

The distinction between "--build" and "--host" exists because the gcc build
wants to rebuild itself with itself, even when doing a canadian cross. It
doesn't want to convert source code into an executable using the available
compiler. It doesn't trust the available compiler. It wants to build a
temporary version of itself, and then build a new version of itself with that
temporary version to PURIFY itself from the VILE TAINT of the host compiler,
and then build itself a _third_ time JUST TO BE SURE.

But why it can't autodetect --build even then, as described above, is a
mystery for the ages...

I personally don't humor gcc. A compiler is just a program, and it should
build the way normal programs do, and if I have to hit its build system with
a large rock repeatedly to make it agree with me, I'm ok with that.
Post by Yann E. MORIN
One of the problems I can see with FWL is how you end up with a firmware
image that runs on the target platform (say my WRT54GL ;-) ), and contains
only what is needed to run it, without all that native build environment
that will definitely not fit in the ~7MiB available in there.
Ok, I should have been more clear:

My project builds cross compilers. I'm mostly trying to compare the cross
compilers I build against the cross compilers your system builds, and what's
involved to get a usable cross compiler out of each system. (The fact that
my cross compilers are produced as a side effect and yours are the focus of
your project is a side issue, although I may have allowed myself to get
distracted by it.)

If you just want a cross compiler and to take it from there yourself, you can
just grab the cross compiler tarball the build outputs and use it to build
your own system. That's why it's tarred up in the first place.

I admit I've been lazy and said "run ./build.sh", which does extra stuff,
instead of saying "run download.sh, host-tools.sh, and cross-compiler.sh, in
that order. The second two take the target $ARCH as an argument, the first
one doesn't." But that's because I'm not really trying to teach you how to
use my build system, I'm just using it as an example of how creating a cross
compiler can be simplified to the point where it can more or less be elided
in passing. I could trivially make a shell script wrapper that does that for
you, or teach ./build to take a "--just-cross" command line argument. I just
haven't bothered.

(To answer your actual question, if you're not interested in building a new
system in the native environment under qemu, you can always
do "NATIVE_TOOLCHAIN=none ./mini-native.sh mipsel" and then add more stuff to
the bare busybox+uClibc directory yourself before
running "SYSIMAGE_TYPE=squashfs ./system-image.sh mipsel", although the
bootloader is still your problem. A slightly cleaner way to do it would be
to create a hw- target for the wrt54gl; see hw-wrt610n for an example.
Several users have also modified their local copy of mini-native.sh to add
extra packages they want to build. But that's a tangent.)
Post by Yann E. MORIN
1- you need correct build tools, including a correctly isolated toolchain
2- you build your packages and install them in a rootfs/ directory
(rootfs/ will contain only your packages' installed files, and is missing
the toolchain libs)
3- you use a copy of rootfs/ which you populate with libs from the
toolchain
4- you use that populated copy to build your firmware images
Of course, if your packages are not cross-compile friendly, you may have
problems. But nowadays, most common packages do cross-compile neatly.
I've seen several large scale cross compiling efforts, from Timesys's
TSRPM-based build system through Gentoo Embedded. They all tend to top out
at the same ~600 packages that can be made to cross compile with enough
effort. (Although each new version of these packages tends to subtly break
stuff that _used_ to work on various targets. For example, in the past year
Python shipped a release or two that didn't work out of the box on mips and
Perl broke on a couple non-x86 targets. Little obscure packages like that,
which obviously nobody really uses...)

The debian repository has somewhere north of 30,000 packages. So somewhere
under 2% of the available packages support cross compiling (and a _lot_ of
effort goes into making even that much continue to cross compile as each new
version comes out), which is why I don't consider it a general solution.

Luckily, most embedded systems are happy restricting themselves to this
existing subset.
Post by Yann E. MORIN
I have seen only a few of them requiring carefully crafted ./configure
options or a few patches here and there (ltrace is such a sucker).
For the record, I have some experience in that field as well ;-), as I've
been doing exactly this stuff for the past four years as my day-time job,
and I've played with LFS and cross-LFS for the previous three years or so.
Note: crosstool-NG was *not* written at my day-time job, but in my own
spare time (which gave some frictions here at home from time to time...
FWL is my hobby project too. I've gotten some sponsored time to work on it
over the years, but altogether that's maybe 10% of the total time I've put
into it.
Post by Yann E. MORIN
Post by Rob Landley
so I
have some experience here too.
And, I do acknowledge your experience and your technical merit.
You know it, let's others know it as well. :-)
I'd like to be clear that I'm not denigrating your experience or expertise
here either. Your project works and people use it. I'm just saying I either
wouldn't have done it that way, or don't see why you did it that way, and you
_did_ ask for details. :)
Post by Yann E. MORIN
Post by Rob Landley
My project is carefully designed in layers so you don't have to use the
cross compilers I build. It should be easy to use crosstool-ng output to
build the root filesystems and system images that the later scripts
produce. (How easy it actually is, and whether there's any benefit in
doing so, is something I haven't really looked into yet.) The point is
the two projects are not actually directly competing, or at least I don't
think they are.
The main goals differ, but the underlying reason is the same: be able to
build stuff that will run on an alien machine. crosstool-NG is limited to
building the required tools (actual compiler, plus a few debug utilities),
while FWL aims at building a native build environment.
Tangent.

I think that cross compiling will continue to be hard in general, even after
you've got a working compiler, so my goal is to bridge to a different native
build environment. (I.E. Getting a reliably working cross compiler _is_ a
hard part of cross compiling, but it's not the _only_ hard part.)

That said, some people want to tackle the hard part for themselves, so giving
them a known working cross compiler saves a huge amount of hassle. (It can
take months to learn how to build one yourself, since there's so many subtly
_wrong_ ways to do it.) And if all you're building for the target is a
couple of static "hello world" binaries, cross compiling's quite reasonable.

It doesn't scale very well, and breaks easily, but assuming you _are_ doing it
(which is what this list is about)...
Post by Yann E. MORIN
Post by Rob Landley
I came at it from a different background. I was playing with Linux From
Scratch almost from the beginning,
LFSer as well in the 2001-2004 era. Went as far as using it as my daily
workstation using KDE. Yeah, I was *that* insane at the time. But it taught
me a whole lot in the end.
Tangent.

I generally had my hands full building server stuff, and didn't play with x11
much. (I built it a couple times, but I needed my laptop to _work_,
including PDF and audio support and wireless networking and so on.)

Keep meaning to poke at building x.org from source, but they split it into so
many different pieces that I'd need to get something like 10 packages working
to run an xterm...
Post by Yann E. MORIN
[--SNIP the genesis of FWL--]
Post by Rob Landley
I looked at crosstool circa 2004-ish, but was turned off by the way it
replicated huge amounts of infrastructure for every single dot release of
every component. (I remember it having separate patches, separate build
scripts, and so on. I don't even remember what it did per-target.)
How can you avoid having one patchset for each version of each component?
Ok, here's a design decision we disagree on. Is there a significant advantage
to supporting multiple versions of the same components, and does it outweigh
the downsides?

In general, if a new package version doesn't do something an old version did
(including "be small enough" for the embedded parts), the new version should
probably be _fixed_. (It's certainly something you want to know about.)

Fragmenting the tester base isn't useful. There's never enough testing to
find all the bugs, or enough developers to implement everything you want to
do, so the next best thing you can do is have all your testers testing the
same thing and focus the development effort to fixing that one thing.

Fixes can only be pushed upstream against the _current_ version of
packages. (Testing the current version of rapidly changing projects with
major unfinished features, such as uClibc and the linux kernel, is especially
useful, because they're the most likely to cause strange subtle breakage in
some obscure package or other.) Testing that early while the developers
still remember what they changed recently is a good thing.

Having the same behavior across different targets allows automated regression
testing that isn't just a laundry list of special cases.

What are the corresponding advantages of supporting multiple versions?
Post by Yann E. MORIN
Of course, FWL uses only the latest versions available (which is wrong, it
still uses gcc-4.1.2 for philosophical reasons)
Tangent.

I keep meaning to go to 4.2.1 but every time I hit a bug or a missing feature,
I test 4.2.1 to see if it fixes it, and I have _never_ found any bug that
4.2.1 fixed or new feature that 4.2.1 supports which 4.1.2 doesn't. I should
just bite the bullet and switch anyway, but so far I just haven't found an
excuse other than "higher number". (Not even "this supports a hardware
target that the other one doesn't". I keep _expecting_ to find one, but I've
been looking on and off for two years now. I'd have just upgraded anyway if
I didn't have to rewrite the armv4 soft float patch for the new version...)
Post by Yann E. MORIN
Post by Rob Landley
http://landley.net/writing/docs/cross-compiling.html
But, of all packages I've been using, most are *now* cross-compile friendly
(with some notable exceptions) and the ones that gave me the most headaches
were the ones coming from companies that don't grok the terms "open" and
"free as in speech". *Those* were real suckers.
Tangent.

I bump into packages that don't want to cross compile all the time, and I
already have people using my build system to compile packages I'm not
personally messing with.

For example, currently uClibc++ is getting quite a workout from Vladimir
Dronnikov at http://uclibc.org/~dvv/ building cmake and nmap and so on
against it, and pushing bug reports upstream to Garrett. (Mark also figured
out how to make uClibc++ work on arm eabi over the weekend, which required
another largeish patch.)

Apparently our experiences differ here.
Post by Yann E. MORIN
Post by Rob Landley
Post by Yann E. MORIN
a- ease overall maintenance
b- ease configuration of the toolchain
c- support newer versions of components
d- add new features
e- add alternatives where it was available
Can't really argue with those goals, although mostly because they're a
bit vague.
What do you mean, "vague" (I understand the word, it's the same in French)?
The question is really: what in the above list qualifies as "vague"?
Tangent.

The important part of that sentence was "Can't really argue with those goals",
and the rest of what I'm about to say here is really irrelevant, but since
you asked (feel free to skip this bit, it's not real objections):

I was confused by "add alternatives where it was available". (Alternate
package versions? Alternate features? Alternate configuration methods?) I
myself tend to lean towards the "do one thing and do it well" approach, so I
try to make sure each alternative is justified and worth bothering the users
to make a decision about. To me, too _many_ alternatives means your project
isn't well-focused. From a user interface perspective, I tend to expect the
default response to any "Now what do I do?" question to be "stop bothering me
and get back to work". (You have to let them override/customize the default
behavior if they think you're doing it _wrong_, but pestering them about it
up front isn't necessarily helpful. Could be a stylistic difference here.)

You state "ease overall maintenance" but then go on to explicitly say "support
newer versions of components" and "add new features" separately... so what's
left in maintenance that those two don't cover? (Make it easy to fix bugs,
maybe?) That confused me a bit on the first reading too.

My answer to the rest is to question "how". Ease configuration... how? I
chose to ease configuration by having as little of it as possible and making
what there was completely optional; you chose to ease configuration by making
it very granular and doing a configuration menu GUI with nine sub-menus.
Both presumably support the same goal in completely opposite ways, which
means the goal itself seems a bit nebulous to me because it doesn't
define "easy".

Thus the specific design choices were likely to be more interesting, and I
expected them to come up later in the same post, so I preferred to argue with
them when I got to them.

Again, you asked. :)
Post by Yann E. MORIN
Post by Rob Landley
My current build system has very careful boundaries.
Call it an API?
No, what I'm getting at is different from an API. Read Brian Kernighan's 1983
usenix paper (often called "Cat -v considered harmful").

Intro here: http://harmful.cat-v.org/cat-v/
Full paper here: http://harmful.cat-v.org/cat-v/unix_prog_design.pdf

These days we'd use the phrase "feature creep" in the discussion. When
designing my project I was very clear on what it would _not_ do.

Crosstool is a lot better than some about defining its boundaries. You
_didn't_ get sucked into becoming an entire distro generator like 15 others
out there.

User interface is a separate (albeit related) issue.
Post by Yann E. MORIN
Post by Rob Landley
This is why my current system is very carefully delineated. I know
exactly what it does NOT do. It builds the smallest possible system
capable of rebuilding itself under itself. I.E. it bootstraps a generic
development environment for a target, within which you can build
natively. It has to do some cross compiling to do this, but once it's
done you can _stop_ cross compiling, and instead fire up qemu and build
natively within that.
Except that it in fact does cross-compiling, as it is escaping the qemu
via distcc to call the cross tools on the build machine. :-/
My goal is to eliminate the _need_ for anybody else to do cross compiling, not
the _ability_. :)

Tangent:

The distcc acceleration trick doesn't require any of the packages being built
to be cross-aware, thus it doesn't restrict you to the ~600 packages that are
already cross-aware. As far as the packages being built are concerned,
they're building fully natively. (And in theory, distcc could be calling out
to other qemu instances, or native hardware. The fact that it _isn't_ is
purely an implementation detail.)
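The masquerade setup behind that trick can be sketched roughly like this. This is a hedged sketch, not FWL's actual code: the directory, tool list, and the 10.0.2.2 address are assumptions for illustration (10.0.2.2 being qemu's usual user-mode network route back to the host).

```shell
#!/bin/sh
# Hypothetical sketch of a distcc masquerade setup as it might look
# inside the emulated target system. Not FWL's actual code.

# masquerade DIR DISTCC - create compiler-name symlinks to distcc in DIR,
# so that packages invoking "gcc" or "cc" transparently get distcc.
masquerade()
{
  mkdir -p "$1" || return 1
  for TOOL in gcc cc g++; do
    ln -sf "$2" "$1/$TOOL" || return 1
  done
}

# Typical use inside the target (illustrative values):
#   masquerade /usr/distcc "$(command -v distcc)"
#   export PATH="/usr/distcc:$PATH"   # builds call "gcc", really get distcc
#   export DISTCC_HOSTS="10.0.2.2"    # the host running the cross tools
```

From the perspective of the package being built, nothing in `$PATH` reveals that the compiles are being farmed out.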
Post by Yann E. MORIN
Post by Rob Landley
What
I'm mostly disagreeing with is your assumptions.
- the goal
- the assumptions made to reach that goal
Both make the "why". What I came up with makes the "how".
As for the goal, I wanted to be able to build dependable
(cross-)toolchains. On the assumptions, I saw that I could not rely on
binutils/gcc/glibc/... to build easily (which is quite the case), and that
I needed a kind of framework to make them build seamlessly.
Tangent:

One big difference between our projects' goals (and strangely enough I wound
up siding with buildroot on this one) is that I chose to only build uClibc,
while your build offers glibc cross compiled to various targets.

(This isn't a criticism, it's a difference in scope. Your project doesn't build
native system images as part of its mandate, mine doesn't build glibc. From
a certain point of view, the C library is part of the target platform, as
much as the processor, endianness, or the OABI/EABI decision on arm, so if I
_was_ going to support it I'd just add extra targets. From the vantage of
the build scripts it's just one more package, and the "whether or not to
support multiple versions of the same package" thing doesn't come up when
it's not the same package.)
Post by Yann E. MORIN
Now we can discuss this, but the original discussion was not to see if the
"how" was the best way to answer the "why".
Post by Rob Landley
In my case, I separated my design into layers, the four most interesting
download.sh - download all the source code and confirm sha1sums
cross-compiler.sh - create a cross compiler for a target.
crosstool-NG stops here. And it strives to offer more options than your
solution.
A great many people are stuck with a specific version of one or more
of the components (gcc/glibc/...) for historical reasons I am not ready
to discuss. Having a tool that can not cope with earlier versions is
not helpful.
Yes and no. If they stuck with earlier versions of the components, why didn't
they stick with earlier versions of the cross compiler (or earlier versions
of the cross compiler build system that build cross compilers with those
components back when they were current)? Why would you upgrade some packages
and be "stuck" with others?

Binutils and gcc are something of a special case because they don't affect the
resulting target system much. Code built with gcc 4.3 and 4.1 should be able
to seamless link together. (If it doesn't, there's a bug. Yes, even C++
according to the ABI, although I wouldn't personally trust it.)

You can upgrade the installed kernel without changing the kernel headers.

Ah, I get it. One of the things I hadn't noticed about your design before now
is the assumption that you _won't_ be building a new system to install, but
that you must build a cross compiler that matches an existing binary image,
to which you'll incrementally be adding packages.

That's a lot harder task than the one I chose to deal with, and explains
rather a lot of the complexity of your build system.

Hmmm, one of the reasons I was uncomfortable with your build system is I
couldn't quite figure out the goals of the project, and that just helped a
lot. The _strength_ of crosstool is if you need to supplement an existing
root filesystem without rebuilding any of the parts of it that are already
there. In that case, you may need fine-grained selection of all sorts of
little details in order to get it to match up precisely, details which would
be completely irrelevant if your goal was to just "build a system for this
target".

Ok, that makes a lot more sense now.
Post by Yann E. MORIN
Post by Rob Landley
mini-native.sh - build a root filesystem containing a native toolchain
system-image.sh - package the root filesystem into something qemu can boot
Those two _use_ the above, they are not part of it.
Exactly. :)
Post by Yann E. MORIN
Post by Rob Landley
Post by Yann E. MORIN
The first step was to split up this script into smaller ones, each
dedicated to building a single component. This way, I hoped that it
would be easier to maintain each build procedure on its own.
I wound up breaking the http://landley.net/code/firmware/old version into
a dozen or so different scripts. My earlier versions the granularity was
too coarse, in that one the granularity got too fine. I think my current
one has the granularity about right; each script does something
interesting and explainable.
So is each script in scripts/build/: each is dedicated to building a
single piece of the toolchain, and each can be replaced without the others
noticing (or so it should be).
The problem I encountered was that doing this made it significantly more
difficult to follow the logic, especially the build prerequisites. (One of
the harder parts is figuring out what order stuff needs to be built in. The
cross-gcc stuff winds up building a lot of things twice to get a clean
toolchain.)

That said, you're not really trying to avoid this kind of complexity, because
being fiddly and granular seems to be the point. (I still think you've gone
overboard in a few cases; there's still no reason to care about the -pipe
option of gcc.)
Post by Yann E. MORIN
Post by Rob Landley
Notice there is _nothing_ target-specific in there. All the target
information is factored out into sources/targets. The build scripts
_do_not_care_ what target you're building for.
That's the same in crosstool-NG: the configuration and the wrapper scripts
set up some variables that the build scripts rely upon to build their stuff.
Although which scripts get called in which order is dependent on
your .config...
Post by Yann E. MORIN
The target-specific configuration is generic, but can be overridden by
target-specific code (eg. ARM can override the target tuple to append
"eabi" to the tuple if EABI is enabled, and to not add it if not enabled;
this can *not* be done in a generic way, as not all architectures
behave the same in this respect).
It can be fairly generic, but the target tuple varies per target no matter
what you do. Some targets (ala blackfin the first time I tried it) won't
give you a -linux and have to do -elf instead, yet Linux builds on 'em.

One dirty trick I pulled is having the _host_ tuple be `uname -m`-walrus-linux
and the target tuple be variants of $ARCH-unknown-linux. Since "unknown !=
walrus", it never did the "oh you're not really cross compiling, lemme short
circuit the logic" thing which used to screw up uClibc on a gcc host.
They've since patched that specific case by expecting uClibc in the tuple
(even though I don't build the C library until _after_ I build the compiler
so technically that decision hasn't been made yet), but in general I like the
build to continue to use the one codepath I've most thoroughly tested and not
drastically change its behavior behind my back. A variant of the "all
targets should behave as similarly as possible" thing.
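That dirty trick boils down to feeding configure two tuples whose vendor fields can never match. A minimal sketch (the target tuple value is illustrative):

```shell
#!/bin/sh
# Sketch of the mismatched-tuple trick described above. The exact target
# tuple is illustrative; the point is only that "walrus" != "unknown".

HOST_TUPLE="$(uname -m)-walrus-linux"
TARGET_TUPLE="armv4l-unknown-linux"

# Because the vendor fields differ, the "oh you're not really cross
# compiling, lemme short circuit the logic" check can never trigger:
#   ./configure --host="$HOST_TUPLE" --target="$TARGET_TUPLE" ...
[ "$HOST_TUPLE" != "$TARGET_TUPLE" ] && echo "cross code path taken"
```

The bogus "walrus" vendor guarantees the comparison fails on every build machine, so the same (well-tested) cross code path runs everywhere.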
Post by Yann E. MORIN
Post by Rob Landley
1) Why do we have to install your source code? The tarball we download
from your website _is_ source code, isn't it? We already chose where to
extract it. The normal order of operations is "./configure; make; make
install". With your stuff, you have to install it in a second location
before you can configure it. Why? What is this step for?
- it requires some pre-existing stuff in your environment, hence the
"./configure" step
- you have to build it, hence the "make" step (although this is only
sed-ing a few place-holders here and there)
- you have to install it to use it, hence the "make install" step
- you add its location/bin to the PATH
But the result is source code, which you then compile to get _another_ binary.
If it was "like any other package" you would download, ./configure, make,
make install, and the result would be the binary you actually run (I.E. the
cross compiler).

Instead you download the package, configure it, make, install, and then you
configure it AGAIN, make AGAIN, and install AGAIN.

I.E. you need to install your source code before you build it. I find that
odd. (There is the ./configure --local thing, but why isn't that the
default?)

Once upon a time linux systems tended to have a common copy of the linux
source code, installed in /usr/src/linux. They did that by extracting the
tarball into that location.
Post by Yann E. MORIN
Then you can run ct-ng, from anywhere and you build your toolchain.
I may repeat myself, but do you expect to build your own program in the
gcc source tree?
Yes, if I could. (I admit gcc and binutils check for this and error out,
never did figure out why.) I build everything else in its own source tree,
including the kernel, uClibc, busybox, and so on. This is actually the
default way to build most packages, the FSF ones are unusual. (That's part
of the ./configure; make; make install thing.)

I tend not to see FSF designs as a good example of anything. As Linus says
near the start of the kernel's Documentation/CodingStyle:

First off, I'd suggest printing out a copy of the GNU coding standards,
and NOT read it. Burn them, it's a great symbolic gesture.
Post by Yann E. MORIN
Post by Rob Landley
2) Your configuration menu is way too granular. You ask your users
whether or not to use the gcc "-pipe" flag. What difference does it
make? Why ask this? Is there a real benefit to bothering them with
this, rather than just picking one?
I will answer this in an answer to your other post, if you will.
I'll catch up. Might take it off list if people continue to get annoyed by
the discussion being too long, though.
Post by Yann E. MORIN
Post by Rob Landley
I want to do a more detailed critique here, but I had to reinstall my
laptop a couple weeks ago and my quick attempt to bring up your
./configure --prefix=/home/landley/cisco/crosstool-ng-1.3.2/walrus
Computing version string... 1.3.2
Checking for '/bin/bash'... /bin/bash
Checking for 'make'... /usr/bin/make
Checking for 'gcc'... /usr/bin/gcc
I note that if any of these aren't there, the build will die very early on
with an error that it couldn't find the appropriate command, so explicitly
checking for them seems a bit redundant. (Doesn't actually _hurt_, but it
seems unnecessary. Judgement call, that. Possibly a matter of personal
taste.)
Post by Yann E. MORIN
Post by Rob Landley
Checking for 'gawk'... not found
Bailing out...
I note that Ubuntu defaults to having "awk" installed; why you _need_ the
GNU version specifically is something I don't understand.
I could not make my awk script work with mawk, which is the default under
the obscure distribution I am using (Debian, I think). So I fall back to
installing gawk. But that was an *enormous* error. Its main use is to try
to build a correct tsocks setup given the options. *That* is purely insane.
It should be going away. No, it /should/ not be going away. It *is* going
away.
The fact that I shoehorned proxy settings in crosstool-NG is an error,
granted; and because I'm using GNU extensions in there, I must check
for GNU awk.
My design approach is to ruthlessly minimize complexity. If I'm not sure
something is going to be there on all systems, I try to figure out if I can
do without it or build it from source.

The approach you've taken is to require the user to build up their system to a
minimum set of requirements.
Post by Yann E. MORIN
Post by Rob Landley
For example, you require libtool. Why are you checking for libtool?
crosstool-NG itself does not require libtool. The components that it builds
will use it if they find it. But if the version is too old, the build will
break (I think it was mpfr at fault there, but am not sure), instead of
simply ignoring it.
So I also have to ensure a correct environment for the components I build.
That's a good reason for checking that the libtool that's installed isn't too
old, but not a good reason for failing if libtool isn't there at all (which
as I understand it would still mean your cross compilers build correctly).

As I said, I trimmed the $PATH to remove everything that wasn't actually used.
Fairly draconian approach to making the build reliable, I know. :)
Post by Yann E. MORIN
Post by Rob Landley
I note
that libtool exists to make non-elf systems work like ELF, I.E. it's a
NOP on Linux, so it's actually _better_ not to have it installed at all
because libtool often screws up cross compiling.
But what if it *is* already installed?
Your test should be able to distinguish "bad version installed" from "not
installed at all".
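A check along the lines Rob suggests might look like this. This is a sketch, not crosstool-NG's actual test; the 1.5.26 minimum version and the version-string parsing are assumptions for illustration.

```shell
#!/bin/sh
# Sketch of a check distinguishing "libtool missing" (harmless, the
# components will simply not use it) from "libtool too old" (breaks the
# build). The 1.5.26 minimum is an illustrative assumption.

# version_ge A B - true if dotted version A >= B (uses GNU sort -V)
version_ge()
{
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

check_libtool()
{
  if ! command -v libtool > /dev/null 2>&1; then
    echo "libtool not installed: fine, the components will skip it"
    return 0
  fi
  VER="$(libtool --version | sed -n '1s/.* //p')"
  if version_ge "$VER" "1.5.26"; then
    echo "libtool $VER: ok"
  else
    echo "libtool $VER: too old, the build would break"
    return 1
  fi
}
```

The point is the three-way outcome: absent is fine, new enough is fine, and only "present but too old" fails the configure step.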
Post by Yann E. MORIN
The end-user is free to install
whatever he/she wants on his/her computer, no? It's just that I want to
check that the environment is sane before going on any further.
They could change it after you run ./configure. Downgrade it and install the
broken version, or upgrade to some new version you've never heard of that's
buggy. New changes to your environment can break things, fact of life. You
can move the sanity tests to the start of each build if that bothers you,
but "libtool is not installed" is not a bad thing in this context needing to
be fixed.
Post by Yann E. MORIN
The fact that libtool sucks is totally irrelevant to the problem.
And yes, it sucks.
Post by Rob Landley
(In my experience, when a project
is designed to do nothing and _fails_ to successfully do it, there's a
fairly high chance it was written by the FSF. One of the things my
host-tools.sh does is make sure libtool is _not_ in the $PATH, even when
it's installed on the host.
Oh, come on... ;-) My libtool is in /usr/bin. Do you want to remove
/usr/bin from my PATH? You'll end up missing a lot of stuff, in this case.
I add that stuff to my path explicitly, knowing exactly what I need to build.

As I said, it's a draconian approach and I wasn't saying I expect other people
to be that extreme. :)
Post by Yann E. MORIN
OK, so now moving on to answer your other post(s)... Took me about two
hours trying to answer this one... :-(
Yeah, this has gotten really long really fast. Not entirely surprised that
happened when we both started talking about our big hobby projects. :)

I still have to reply to the second half of your first message... :)

Rob
--
GPLv3 is to GPLv2 what Attack of the Clones is to The Empire Strikes Back.

--
For unsubscribe information see http://sourceware.org/lists.html#faq
Thomas Petazzoni
2009-04-14 14:31:36 UTC
Permalink
On Mon, 6 Apr 2009 22:11:03 +0200,
Post by Yann E. MORIN
Just my turn to rant a little bit ;-) For many, cross-compiling can't
be avoided. Running under an emulator is biased. Your ./configure
might detect some specifics of the machine it's running on (the
emulator) that might prove wrong on the real hardware. Or the other
way around, miss some specifics of the real hardware that the
emulator does not provide.
We happened to discuss Firmware Linux during the « Build Tools BOF » at
Embedded Linux Conference last week in San Francisco. For the audience,
Firmware Linux indeed looked like an interesting proof of concept, but
that proof of concept would *never* deprecate the need for normal
cross-compilers, normal cross-compiling and tools like Crosstool-NG.

Reason? Just because Firmware Linux makes the assumption that a Qemu
version exists for each and every architecture in the world. It
basically makes Qemu a mandatory component to be able to build code
for a new architecture. Is Qemu available for NIOS? For Microblaze?
Not that I'm aware of.

Therefore, regardless of Firmware Linux's existence, tools like Crosstool-NG
will always be necessary, and well-maintained, simple tools like
Crosstool-NG are more than welcome. Yann, I made quite an extensive
promotion of Crosstool-NG both during my talk on Buildroot and the BOF
on build tools during last week's conference.

Cheers,

Thomas
--
Thomas Petazzoni, Free Electrons
Kernel, drivers and embedded Linux development,
consulting, training and support.
http://free-electrons.com

Allen Curtis
2009-04-14 15:07:04 UTC
Permalink
Can we declare that this conversation is a "Rat Hole" and should be
terminated?
Yann E. MORIN
2009-04-14 17:11:21 UTC
Permalink
Hello All!
Post by Allen Curtis
Can we declare that this conversation is a "Rat Hole" and should be
terminated?
I, as the OP, do solemnly declare this thread closed. ;-)
A lot has been said, whether on- or off-topic, and that is enough.

Thank you very much to all participants for your input!

Regards,
Yann E. MORIN.
--
.-----------------.--------------------.------------------.--------------------.
| Yann E. MORIN | Real-Time Embedded | /"\ ASCII RIBBON | Erics' conspiracy: |
| +0/33 662376056 | Software Designer | \ / CAMPAIGN | ___ |
| --==< ^_^ >==-- `------------.-------: X AGAINST | \e/ There is no |
| http://ymorin.is-a-geek.org/ | _/*\_ | / \ HTML MAIL | v conspiracy. |
`------------------------------^-------^------------------^--------------------'


Rob Landley
2009-04-06 05:45:49 UTC
Permalink
Post by Yann E. MORIN
2.b) Ease configuration of the toolchain
In that state, configuring crosstool required editing a file containing
shell variable assignments. There was no proper documentation of what
variables were used, and no clear explanation of each variable's
meaning.
My response to this problem was to write documentation.

Here's my file containing every configuration value used by build.sh or one of
the scripts it calls:

http://impactlinux.com/hg/firmware/file/tip/config

Each of those variables defaults to blank. You only set it if you want to
change that default value. There's a comment right before it explaining what
it does. You can set them in your environment, or set them in that file,
either way.
Post by Yann E. MORIN
The need for a proper way to configure a toolchain arose, and I quite
instinctively turned to the configuration scheme used by the Linux
kernel. This kconfig language is easy to write. The frontends that
then present the resulting menuconfig have limitations in some corner
cases, but they are maintained by the kernel folks.
While I've used kconfig myself, there's an old saying: "If all you have is a
hammer, everything looks like a nail".

The failure mode of kconfig is having so much granularity that your users wind
up being the guy standing at the register at Starbucks going "I just want a
coffee!" (Not sure if that reference translates.)

Ironically, kconfig is only really worth using when you have enough config
options to bother with it. When you have small numbers of config options
that are usually going to be off, I prefer environment variables (with a
config file in which you can set those in a persistent manner) or command
line options. Since you can set an environment variable on the command line,
ala:

FORK=1 ./buildall.sh

I lean towards those. Possibly a matter of personal taste...
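The style described above (blank defaults, overridable from a config file or the command line) can be sketched in a few lines. The variable and function names here are illustrative assumptions, not FWL's actual ones:

```shell
#!/bin/sh
# Sketch of environment-variable configuration with an optional
# persistent config file. Names are illustrative, not FWL's actual ones.

# Optional persistent settings: a plain shell file the user may edit.
[ -f ./config ] && . ./config

# Each knob defaults to blank (or a sane value); the environment or
# ./config overrides it. "FORK=1 ./buildall.sh" enables it for one run.
: "${FORK:=}"
: "${CROSS_TARGET:=arm-unknown-linux}"

build()
{
  if [ -n "$FORK" ]; then
    echo "building $CROSS_TARGET in background"
  else
    echo "building $CROSS_TARGET"
  fi
}
```

Setting a variable on the command line, in the environment, or in the config file all land in the same place, so there is exactly one configuration mechanism to document.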
Post by Yann E. MORIN
Again, as with the build scripts above, I decided to split each component's
configuration into separate files, with an almost 1-to-1 mapping.
I did that in an earlier version of my build scripts (the one available in
the "old" directory).

But the thing is, doing that assumes each build component is big and evil and
fiddly, and that makes them tend to _become_ big and evil and fiddly. For
example, your "binutils.sh" is 119 lines. Mine's 16, including a comment and
two blank lines.

Ok, a more fair comparison would include both the cross and native binutils
builds (add another 15 lines for the native one, again with two blank lines
and a comment), plus the download.sh call to the download function for
binutils (6 lines, of which only three are needed: one setting the URL, one
setting the SHA1SUM, and one calling download.)

So 37 lines vs your 119.
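A download helper of the kind described does fit in very few lines. This is a hedged sketch in the same spirit, not FWL's actual download function (names and layout are assumptions):

```shell
#!/bin/sh
# Sketch of a download-and-verify helper: fetch a tarball once, and
# refuse to use it unless its sha1sum matches. Illustrative only.

SRCDIR="${SRCDIR:-$PWD/sources}"

# verify FILE SHA1 - true if FILE's sha1sum matches the expected value
verify()
{
  [ "$(sha1sum "$1" | cut -d' ' -f1)" = "$2" ]
}

# download URL SHA1 - fetch URL into $SRCDIR unless a verified copy exists
download()
{
  FILE="$SRCDIR/${1##*/}"
  # Skip the fetch if we already have a verified copy.
  [ -f "$FILE" ] && verify "$FILE" "$2" && return 0
  mkdir -p "$SRCDIR" &&
  wget -q -O "$FILE" "$1" &&
  verify "$FILE" "$2" || { rm -f "$FILE"; return 1; }
}
```

Each package then contributes just the URL, the checksum, and one `download` call.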

The other thing is that having the build be in one file makes the
relationships between components very obvious. An extremely important piece
of information in Linux From Scratch is what _order_ you have to build the
packages in, since everything depends on everything else and the hard part is
breaking circular dependencies.

My cross-compiler.sh is a shell script, 150 lines long, that builds a cross
compiler. It builds and installs binutils, builds and installs gcc, adjusts
them into a relocatable form, installs the linux kernel headers, builds and
installs uClibc, creates a README out of a here document, makes a tarball of
the result, and then runs a sanity test on the newly created cross compiler
by building "hello world" with it (once dynamically linked and once
statically linked, and optionally runs qemu application emulation on the
statically linked one to see if it outputs "hello world" and returns an exit
code of 0).

That's it, 150 lines. Not big or complicated enough to break up. (Now
mini-native.sh is twice that size, and I've pondered breaking it up. But 322
lines isn't excessive yet, so breaking it up is still probably a net loss of
understandability.)

Getting back to menuconfig, since it _is_ so central to your design, let's
look at the menuconfig entries. I still have 1.3.2 installed here, which
starts with nine sub-menus, let's go into the first, "paths and misc
options":

The first three options in the first menu aren't immediately useful to a
newbie like me:

[ ] Use obsolete features
[ ] Try features marked as EXPERIMENTAL (NEW)
[ ] Debug crosstool-NG (NEW)

I dunno what your obsolete versions are, I don't know what your experimental
options are, and I dunno what debugging crosstool-ng does. I am not
currently qualified to make any decisions about them, because I don't know
what they actually control.

Looking at the help... the "obsolete features" thing seems useless? We've
already got menus to select kernel and gcc versions, this just hides some of
those versions? Why? (Shouldn't it default to the newest stable version?
If it doesn't, shouldn't it be _obvious_ that the newest stable version is
probably what you want?)

Marking old versions "deprecated" might make a certain amount of sense;
marking them obsolete and hiding them, but still having them available, less
so.

Similarly, the "experimental" one seems useless because when you enable it the
experimental versions already say "EXPERIMENTAL" in their descriptions
(wandered around until I found the binutils version choice menu and looked at
it to be sure). They're marked anyway, so why is an option to hide them an
improvement?

As for the third, wasn't there a debug menu? Why is "Debug crosstool-NG" in
the paths menu? (Rummage, rummage... Ah, I see, the debug menu is a list of
packages you might want to build and add to the toolchain. Ok, sort of makes
sense. Still, the third thing a newbie sees going through in order is
a "very very expert" option. Moving on...)

() Local tarballs directory (NEW)
(${CT_TOP_DIR}/targets) Working directory (NEW)
(${HOME}/x-tools/${CT_TARGET}) Prefix directory (NEW)

Most users aren't going to care where the local tarballs directory is, or the
working directory. The "prefix directory" is presumably different from where
we just installed with --prefix. I suppose it's nice that you can override
the defaults, but having it be one of the first questions a user's posed with
when going through the options in order trying to configure the thing isn't
really very helpful. It's not my problem, just _work_. (I also don't know
what CT_TOP_DIR and CT_TARGET are, I'd have to go look them up.)

For comparison, my system creates a tarball from the resulting cross compiler,
and leaves an extracted copy as "build/cross-compiler-$ARCH". You can put
them wherever you like, it's not my problem. They're fully relocatable.

On the build front, if you want one of the directories (such as "build"
or "packages") to live somewhere else, move it and put a symlink. I didn't
bother to document this because I expected people to think of it...

[*] Remove documentation (NEW)

Nice, and possibly the first question someone who _isn't_ a cross compiler
toolchain developer (but just wants to build and use the thing) might
actually be interested in.

Your ./configure still requires you to install makeinfo no matter what this is
set to. You have to install the package so this can delete its output?

Wouldn't it be better to group this with a "strip the resulting binaries"
option, and any other space saving switches? (I'm just assuming you _have_
them, somewhere...)

[*] Render the toolchain read-only (NEW)

This is something the end user can do fairly easily for themselves, and I'm
not quite sure what the advantage of doing it is supposed to be anyway. In
any case it's an install option, and should probably go with other install
options, but I personally wouldn't have bothered having this option at all.

[ ] Force downloads (NEW)

I noticed your build doesn't detect whether or not the tarballs downloaded
properly. I hit this the first time I ran crosstool-ng, when I ctrl-c'd out
of what it was doing after getting no progress indicator for what seemed like
an unreasonably long time; on the second attempt it died because the tarball
it had halfway downloaded didn't extract right. (Took me a little while to
figure out how to fix that.)

Forcing re-downloads every build puts unnecessary strain on the mirrors, and
seems a bit impolite. (Plus your re-download can time out halfway through if
the net hiccups.) But the alternative you've got is your infrastructure
won't notice corrupted tarballs other than by dying.

What my download.sh script does is check the sha1sum of any existing tarball,
keep it if it's correct, and automatically redownload it if the file doesn't
exist or the sha1sum doesn't match. (That's also a quick and dirty check
that the mirrors we're downloading from didn't get hacked, but that's just a
fringe benefit.)

Mine also falls back to a series of mirrors, most notably
http://impactlinux.com/fwl/mirror (which covers the "but you're not mirroring
it!" complaints of the FSF in case they ever decide that I'm a commercial
user not covered by section 3C of GPLv2, and decide to pull a mepis on me.
Not that this is a primary consideration, but I _do_ offer prebuilt binaries
of each toolchain for download.) So if one of the websites is temporarily
down, or your wget dies halfway through due to a router reboot and the
resulting binary is truncated so the sha1sum is wrong, it still has a chance
to get the file without breaking the build.

The mirror list is in the file download.sh in case people want to edit it and
add their own, and I also have an environment variable you can set, ala:
PREFERRED_MIRROR=http://impactlinux.com/fwl/mirror

Which will be checked before the initial download location, so if you have a
local web server on your LAN it can download stuff from there and never go
out to the net for it. But you never HAVE to set that variable, and aren't
required to know it exists.
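Roughly, in shell, the logic reads like this (a sketch, not the literal download.sh: the function names, SRCDIR, and the URL handling are illustrative):

```shell
# Sketch of the download logic described above: keep a cached tarball only
# if its sha1sum checks out, otherwise try the preferred mirror first, then
# each fallback URL, deleting any truncated/corrupt result before retrying.

SRCDIR="${SRCDIR:-packages}"

verify_sha1() {
  # $1 = file, $2 = expected sha1; true only if the file exists and matches
  [ -f "$1" ] && [ "$(sha1sum "$1" | awk '{print $1}')" = "$2" ]
}

download() {
  # $1 = filename, $2 = expected sha1, remaining args = mirror URLs to try
  local file="$1" sha1="$2" url
  shift 2
  verify_sha1 "$SRCDIR/$file" "$sha1" && return 0   # cached copy is good
  for url in ${PREFERRED_MIRROR:+"$PREFERRED_MIRROR/$file"} "$@"; do
    wget -t 2 -T 20 -O "$SRCDIR/$file" "$url" || { rm -f "$SRCDIR/$file"; continue; }
    verify_sha1 "$SRCDIR/$file" "$sha1" && return 0
    rm -f "$SRCDIR/$file"   # truncated or corrupt: zap it, try the next mirror
  done
  return 1
}
```

So a cached, intact tarball costs one sha1sum and zero network traffic, and a bad one gets replaced automatically.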

The takeaway here is I don't like halfway solutions. If a problem's worth
fixing, it's worth fixing thoroughly. Otherwise, don't address it at all and
let the user deal with it if they care to.

[ ] Use a proxy (NEW) --->

Wow, are these still used in 2009? Ok? (It just never came up for me...)

[ ] Use LAN mirror (NEW) --->

I mentioned PREFERRED_MIRROR above, and the fact that the mirror I setup is
already in the default list as a fallback.

In this option's sub-menu, why do you have individual selections instead of
just having 'em provide a URL prefix pointing to the directory in which to
find the packages in question? You already know the name of each package
you're looking for...

(10) connection timeout (NEW)

This is an implementation detail. Users should hardly ever care.

My system uses wget instead of curl (because wget is in busybox and curl
isn't). The actual invocation in sources/functions.sh line 189 (shell
function "try_download") is:

wget -t 2 -T 20 -O "$SRCDIR/$FILENAME" "$1" ||
(rm "$SRCDIR/$FILENAME"; return 2)

That's 2 attempts to download, timeout of 20 seconds. (And if wget exits with
an error, zap the partial download.)

Since it's a shell script, people are free to change those defaults by editing
the shell script. It seems uncommon enough to _need_ to do this that making
a more convenient way to do it didn't seem worth the extra complexity the
user would be confronted with to configure the thing. (I.E. if I keep the
infrastructure as simple as possible, the user should be able to find and
edit the wget command line more easily than finding and changing a
configuration option would be.)

As a higher level design issue, it would have been easier for me to implement
my build system in python than in bash, but the point of doing it in bash is
it's the exact same set of commands you'd run on the command line, in the
order you'd run them, to do this yourself by hand. So to an extent the shell
scripts act as documentation and a tutorial on how to build cross compilers.
(And I added a lot of #comments to help out there, because I _expect_ people
to read the scripts if they care about much more than just grabbing prebuilt
binary tarballs and using them to cross compile stuff.)

[ ] Stop after downloading tarballs (NEW)

This seems like it should be a command line option. It's a bit awkward that
if you just want to download the tarballs, you go into menuconfig, switch
this on, run the build, go back into menuconfig, and switch this off again.

Mine just has "./download.sh", which you can run by itself directly.

[ ] Force extractions (NEW)

Ah, you cache the results of tarball extraction too. I hadn't noticed. (I
hadn't bothered to mention that mine's doing it because it's just an
implementation detail.)

This is one of the things my setupfor function does: it extracts source into
build/sources, in a subdirectory with the same name as the package. Then
when it needs to actually build a package, it creates a directory full of
hard links (cp -lfR sourcedir targetdir) to the source, which is quick and
cheap.
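The shape of that is roughly (a sketch: the directory layout and function name are illustrative, not the exact setupfor):

```shell
# Rough sketch of the hard-link trick just described: one pristine extracted
# tree under build/sources, one cheap throwaway link-tree per build.

setupfor_sketch() {
  # $1 = package name, already extracted under build/sources/$1
  mkdir -p build/temp
  rm -rf "build/temp/$1"
  cp -lfR "build/sources/$1" "build/temp/$1"   # hard links: no data copied
  cd "build/temp/$1" || return 1
}
```

Since build steps mostly create new files rather than rewriting the source in place, the pristine copy survives while the link-tree can be deleted and recreated for pennies.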

(By the way, there's a SNAPSHOT_SYMLINK config option, which if set will do
a "cp -sfR" instead of "cp -lfR", creating symlinks instead of hard links.
This is noticeably slower and consumes a zillion inodes, which hard links
don't. But the _advantage_ of this is your build/sources directory can be a
symlink to a different filesystem than the one you're building on, possibly
on something crazy like NFS. So your extracted source code doesn't have to
live on your build machine, which is nice if you're building on strange
little mips or arm systems that have network access but only a ramfs for
local storage. Note that building on NFS sucks because the dentry caching
screws up the timestamp granularity make depends on, but having your source
code _symlinked_ from NFS while the filesystem you're creating all your temp
files in is local does not have such problems. The source remains "reliably
old enough" (unless your build is crazy enough to modify its source code,
which should never happen).)

Oh, and the directory it saves under build/sources is just the package name,
minus the version number. (The trick to removing the version number is to
extract it into an empty directory, and then "mv * name-you-expect". That
breaks if the package creates more than one directory, but you _want_ the
script to stop if that happens because something is deeply wrong.) Removing
the version number from the cached source directory means that only
download.sh ever has to care about the version number, the build scripts
don't. So usually all you have to do to upgrade a package is change its
entry in download.sh and rerun the build. (Admittedly, sometimes you have to
fix things that break because the new version doesn't build the same way the
previous one did, but for sanely maintained packages that's not an issue the
majority of the time. Alas, most of the packages the FSF maintains are
insane, but I'm using the last GPLv2 releases of gcc, binutils, and make, and
using bash 2.05b because newer bash is mostly bloat without benefit, so it's
really uClibc, busybox and the kernel that get upgraded often.)
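The mv trick reads roughly like this (a sketch; paths and the function name are illustrative):

```shell
# Sketch of the version-stripping extract described above: unpack into an
# empty directory, then rename the single top-level directory to the bare
# package name.

extract_unversioned() {
  # $1 = tarball, $2 = bare package name (no version)
  rm -rf "build/sources/$2" build/temp-extract
  mkdir -p build/sources build/temp-extract
  tar -xf "$1" -C build/temp-extract || return 1
  # If the tarball created more than one top-level entry, this mv fails,
  # stopping the script -- which is what you want: something is deeply wrong.
  mv build/temp-extract/* "build/sources/$2" || return 1
  rmdir build/temp-extract
}
```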

Again, I detect "good/stale" cached data via sha1sums. The extract function
(in sources/functions.sh) writes a file "sha1-for-source.txt" into the source
directory it extracts and patches. When it's run again on the same source,
it first checks the sha1sums in that file against the sha1sum of the package
tarball and the sha1sum of each patch that was applied to that package. If
they all match, then it keeps the existing source and returns success. If
they don't match, it does an rm -rf on the old directory (if any), extracts
the tarball, and applies all the patches to it in order (again, if any).

Note that the sequencing here is important: it doesn't append the sha1sum for
the tarball to the file until the tarball has successfully extracted, and it
doesn't append the sha1 for each patch until "patch" has returned success.
That way if the extract fails for some reason (possibly disk full) the next
call to extract will be able to tell that it's wrong, and will rm -rf the
junk and do it again.

Also note that we don't check the _contents_ of the directory, just the
sha1-for-source.txt file that says we _put_ source we were happy with there.
If the user comes along and makes temporary tweaks to this source for testing
purposes, we keep it until they're done testing, at which point they can
rm -rf that directory.
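Condensed into a sketch (the stamp-file layout is as described; the function names and tar options are illustrative):

```shell
# Sketch of the sha1-for-source.txt scheme described above. Each sha1 is
# appended only *after* its step succeeded, so a half-finished extract
# never matches the stamp and gets redone.

cached_source_ok() {
  # $1 = source dir; remaining args = tarball, then patches, in order
  local stamp="$1/sha1-for-source.txt"
  shift
  [ -f "$stamp" ] || return 1
  [ "$(sha1sum "$@" | awk '{print $1}')" = "$(awk '{print $1}' "$stamp")" ]
}

extract_cached() {
  # $1 = tarball, $2 = source dir, remaining args = patches to apply
  local tarball="$1" dir="$2" p
  shift 2
  cached_source_ok "$dir" "$tarball" "$@" && return 0   # cache hit: keep it
  rm -rf "$dir"
  mkdir -p "$dir"
  tar -xf "$tarball" -C "$dir" --strip-components=1 || return 1
  sha1sum "$tarball" | awk '{print $1}' >> "$dir/sha1-for-source.txt"
  for p in "$@"; do
    patch -d "$dir" -p1 < "$p" || return 1
    sha1sum "$p" | awk '{print $1}' >> "$dir/sha1-for-source.txt"
  done
}
```

Note it never inspects the extracted files themselves, only the stamp, which is why manual tweaks to the tree survive a rerun.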

By the way, my project's equivalent of "make clean" is "rm -rf build", and the
equivalent of make distclean is "rm -rf build packages". I believe that's in
the documentation.html file, but it should probably also be in the README...

Added.

[*] Override config.{guess,sub} (NEW)

I consider autoconf/automake horrible abominations that have outlived their
usefulness, and they need to die and be replaced by something else. (As
does "make".) I believe I ranted about that in the OLS compiler bof
video. :)

I can sort of see this, but it's one of those "you really, really, really need
to know what you're doing, and you might be better off patching or upgrading
the package in question instead".

[ ] Stop after extracting tarballs (NEW)

In my case, you do "./download.sh --extract" which will download and extract
every downloaded tarball. If you run it twice, the second time it should
just confirm a lot of sha1sums and otherwise figure out it has nothing to do.

You can also "./download.sh && ./download.sh --extract" to get all the
networking stuff out of the way in one go, and then do all the CPU intensive
stuff. I like that because sometimes I have intermittent network access on
my laptop (I.E. about to move somewhere else in five minutes, and that place
has no net), so getting all the stuff that needs to talk to the network out
of the way up front is nice. *shrug* YMMV.

The reason I made --extract a command line option instead of an environment
variable is I can't think of a reason you'd ever want to run it twice in a
row. Environment variables have the advantage you'd want to set them
persistently, but in this case, that's pretty much nonsensical. Command line
options are designed for "this run only, do this thing differently".

In comparison, having this functionality controlled via menuconfig seems a bit
awkward to me.

(1) Number of parallel jobs (NEW)

My sources/includes.sh autodetects the number of processors and sets CPUS.
You can override it by setting CPUS on the command line. (I often
do "CPUS=1 ./build.sh x86_64" when something breaks so I get more
understandable error messages.)

In general, I try never to ask the user for information I can autodetect sane
defaults for, I just let them override the defaults if they want to.
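The pattern is just this (how includes.sh actually counts processors is an assumption here):

```shell
# Autodetect-with-override: take $CPUS from the environment if already set,
# otherwise detect it (falling back to 1 if detection fails).

: "${CPUS:=$(nproc 2>/dev/null || echo 1)}"
echo "building with make -j$CPUS"
```

So `CPUS=1 ./build.sh x86_64` overrides it for one run, and nobody who doesn't care ever sees the knob.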

(0) Maximum allowed load (NEW)

Ooh, that's nice, and something mine doesn't have. Personally I've never had
a clear enough idea of what loadavg's units were to figure out how it equated
to slowing down my desktop, and I've actually found that my laptop's
interactivity going down the drain is almost never due to loadavg, it's due
to running out of memory and the thing going swap-happy with the disk pegged
as constantly active. (The CPU scheduler is way the heck better than the
I/O scheduler, and virtual memory is conceptually horrible and quite possibly
_never_ going to be properly fixed at the theoretical level. You have to
accurately predict the future in order to do it right, that's slightly
_worse_ than solving the halting problem...)

(0) Nice level (NEW)

Again, trivial to do from the command line:

nice -n 5 ./build.sh

I had some scripts do this a while ago, but took it out again. I go back and
forth on whether or not it's worth it. It would be easy enough for me to add
this to config and have build.sh call the sub-scripts through nice, so you
could set this persistently. I just haven't bothered. (I have sometimes
reniced the processes after launching 'em, that's easy enough to do too.)

[*] Use -pipe (NEW)

Why would you ever bother the user with this? It's a gcc implementation
detail, and these days with a modern 2.6 kernel dentry and page caches you
probably can't even tell the difference in benchmarks because the data never
actually hits the disk anyway.

Have you actually benchmarked the difference?

[ ] Use 'ash' as CONFIG_SHELL (NEW)

A) I haven't got /bin/ash installed. Presumably you need to install it since
the help says it's calling it from an absolute path?

B) If your scripts are so slow that you need a faster shell to run them,
possibly the problem is with the scripts rather than with the shell?

I admit that one of the potential weaknesses of my current system is that it
calls #!/bin/bash instead of #!/bin/sh. I agonized over that one a bit. But
I stayed with bash because A) dash is seriously broken, B) bash has been the
default shell of Linux since before the 0.0.1 release.

If you read Linus's book "Just for fun", he details how he wrote a terminal
program in assembly that ran booted from a floppy because minix's microkernel
serial port handling couldn't keep up with a 2400 bps modem without dropping
characters, and then he taught it to read/write the minix filesystem so he
could upload and download stuff, and then he accidentally turned it into a
unix kernel by teaching it to handle all the system calls bash needed so he
could rm/mv/mkdir to make space for his downloading without having to reboot
into minix. The shell was specifically bash. Redirecting the /bin/sh
symlink of ubuntu to something other than bash was the DUMBEST TECHNICAL
DECISION UBUNTU HAS EVER MADE. (Note that they still install bash by
default.)

Anyway, if your build scripts really are that slow you can autodetect
when /bin/dash or /bin/ash exists on the host and use those if they're there.
But personally I'd recommend making your build scripts do less work instead.

Maximum log level to see: (INFO) --->

I don't have a decent idea of what you get with each of these. (Yes, I've
read the help.)

In my build, it spits out all the output you get from the build, but you're
welcome to redirect it using normal unix command line stuff. I usually do

./build.sh 2>&1 | tee out.txt

And another trick is that each new section and package is announced with a
line starting with ===, so you can do:

./build.sh 2>&1 | grep ===

Or drop the 2>&1 part to see stderr messages as well.

Again, I gave them the output, and a couple of hooks to make wading through it
easier, but what they do with it is not really my problem.

Oh, I pull one other dirty trick, at the end of setupfor:

# Change window title bar to package now
echo -en "\033]2;$ARCH_NAME $STAGE_NAME $PACKAGE\007"

So you can see in the window title bar what architecture, stage, and package
it's currently building.

[ ] Warnings from the tools' builds (NEW)

Again, filtering the output of the build I leave to the user. They're better
at it, and 90% of the time they just want to know that it's still going, or
that it succeeded, or what error it died with.

But I can only _guess_ what they want, so I don't. In general, I try not to
assume they're not going to want to do some insane crazy thing I never
thought of, because usually I'm the one doing the insane crazy things the
people who wrote the stuff I'm using never thought of, so I sympathize.

[*] Progress bar (NEW)

I have the "dotprogress" function I use for extracting tarballs; it prints a
period every 25 lines of input.
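Reconstructed from that description (not the exact function from the scripts), it's just:

```shell
# Read stdin and print a dot per 25 lines, so tarball extraction shows
# signs of life without scrolling a wall of filenames.

dotprogress() {
  local count=0
  while read -r _; do
    count=$(( (count + 1) % 25 ))
    [ "$count" -eq 0 ] && printf '.'
  done
  echo   # final newline once the input ends
}
```

Used as, e.g., `tar xvjf package.tar.bz2 | dotprogress`.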

In general I've found "crud is still scrolling by in the window" to be a
decent indication that it's not dead yet. I mostly stay out of these kind of
cosmetic issues these days.

I used to change the color of the output so you could see at a glance what
stage it was, but people complained and I switched to changing the title bar
instead. Tells you at a glance where the build is, which was the point.
(You can still get the colors back with a config variable, but gnome can't
give you a real black background, just dark grey, and less than half as many
colors are easy to read on a white background.)

[*] Log to a file (NEW)

Again, "./build.sh 2>&1 | tee out.txt". Pretty much programming 101 these
days, if you haven't learned that for building all the other source packages
out there, cross compiling probably isn't something you're ready for.

[*] Compress the log file (NEW)

./build.sh 2>&1 | tee >(gzip > out.txt)

And I'm at the end of this menu, so I'll pause here for now. (And you were
apologizing for writing a long message... :)

Rob
--
GPLv3 is to GPLv2 what Attack of the Clones is to The Empire Strikes Back.

--
For unsubscribe information see http://sourceware.org/lists.html#faq
Yann E. MORIN
2009-04-06 22:12:35 UTC
Permalink
Rob,
All,
OK, I think I'm catching up with the backlog ;-)
Post by Rob Landley
Post by Yann E. MORIN
2.b) Ease configuration of the toolchain
As it stood, configuring crosstool required editing a file containing
shell variable assignments. There was no proper documentation of what
variables were used, and no clear explanation of each variable's
meaning.
My response to this problem was to write documentation.
Sure. But documentation is not all. Fixing and enhancing both the code
and the configuration scheme is also a way to achieve a better end-user
experience (Ahaha! I've wandered too much with the marketing dept lately).
Post by Rob Landley
While I've used kconfig myself, there's an old saying: "If all you have is a
hammer, everything looks like a nail".
Hehe! Show me a better and simpler way.
Post by Rob Landley
The failure mode of kconfig is having so much granularity
So what? Should I restrict what the end-user is allowed to do, based solely
on my own experience? If the possibility exists in the tools, then why
prevent the user from using (or not using) that? Configuring a compiler
has soooo many options, it drives you mad. Quite a lot can be inferred
from higher-level options, such as the arch name and the CPU variant, or
the ABI...
Post by Rob Landley
Ironically, kconfig is only really worth using when you have enough config
options to bother with it.
But there *are* a lot of options! And that's not even all that could be
available!
Post by Rob Landley
When you have small numbers of config options
that are usually going to be off, I prefer environment variables (with a
config file in which you can set those in a persistent manner) or command
line options. Since you can set an environment variable on the command line,
FORK=1 ./buildall.sh
I lean towards those. Possibly a matter of personal taste...
I think so. To configure stuff, I prefer having a GUI that is not vi, but
that is still simple, and the kconfig language and its mconf interpreter
seem quite fitting, even if they are not the best. But there is no alternative
that I'm aware of. Plus, it is quite well known thanks to the Linux kernel
using it.
Post by Rob Landley
Getting back to menuconfig, since it _is_ so central to your design, let's
look at the menuconfig entries. I still have 1.3.2 installed here, which
starts with nine sub-menus, let's go into the first, "paths and misc
The first three options in the first menu aren't immediately useful to a
[ ] Use obsolete features
[ ] Try features marked as EXPERIMENTAL (NEW)
[ ] Debug crosstool-NG (NEW)
I dunno what your obsolete versions are, I don't know what your experimental
options are, and I dunno what debugging crosstool-ng does. I am not
currently qualified to make any decisions about them, because I don't know
what they actually control.
Looking at the help... the "obsolete features" thing seems useless? We've
already got menus to select kernel and gcc versions, this just hides some of
those versions? Why? (Shouldn't it default to the newest stable version?
If it doesn't, shouldn't it be _obvious_ that the newest stable version is
probably what you want?)
OK, obsolete means that I can't afford to ensure that they still build.
Post by Rob Landley
Marking old versions "deprecated" might make a certain mount of sense.
marking them obsolete and hiding them, but still having them available, less
so.
OK, deprecated is much more meaningful, I admit. Let's settle for
s/OBSOLETE/DEPRECATED/, then.
Post by Rob Landley
Similarly, the "experimental" one seems useless because when you enable it the
experimental versions already say "EXPERIMENTAL" in their descriptions
(wandered around until I found the binutils version choice menu and looked at
it to be sure). They're marked anyway, so why is an option to hide them an
improvement?
EXPERIMENTAL in the prompt is just a string. A config knob makes the user
really aware that he/she's trying something that might break.

Plus, it marks the resulting config file as containing EXPERIMENTAL
features/versions/... and is easier to process.
Post by Rob Landley
As for the third, wasn't there a debug menu? Why is "Debug crostool-NG" in
the paths menu? (Rummage, rummage... Ah, I see, the debug menu is a list of
packages you might want to build and add to the toolchain. Ok, sort of makes
sense. Still, the third thing a newbie sees going through in order as
a "very very expert" option. Moving on...)
That's why the former is titled "Debug crosstool-NG", while the latter
is titled "debug facilities". Again, maybe the wording is wrong.
Post by Rob Landley
() Local tarballs directory (NEW)
(${CT_TOP_DIR}/targets) Working directory (NEW)
(${HOME}/x-tools/${CT_TARGET}) Prefix directory (NEW)
Most users aren't going to care where the local tarballs directory is, or the
working directory.
Most. Not all. And the help entries are here to tell the user whether
it is wise to change them.
Post by Rob Landley
The "prefix directory" is presumably different from where
we just installed with --prefix.
The help roughly says: the path where the toolchain is expected to run from.
Unfortunately, there is no support yet for DESTDIR, the place where the
toolchain will be installed, to allow installing out-of-tree. For the time
being, the DESTDIR is plainly /; that is, the toolchain is expected to run
on the system it is built on. But that should eventually be fixed.
Post by Rob Landley
I suppose it's nice that you can override
the defaults, but having it be one of the first questions a user's posed with
when going through the options in order trying to configure the thing isn't
really very helpful. It's not my problem, just _work_.
Where should I install the toolchain? In the user's home directory?
This is indeed the default, but you are pestering against it!

If not in ${HOME}, where should I install the toolchain? In /opt ?
In /usr/local ? Bah, most users don't have write access there.
Except root. But building as root is asking for problems!
Post by Rob Landley
(I also don't know
what CT_TOP_DIR and CT_TARGET are, I'd have to go look them up.)
docs/overview.txt is advertised in the top-level README.
Post by Rob Landley
For comparison, my system creates a tarball from the resulting cross compiler,
and leaves an extracted copy as "build/cross-compiler-$ARCH". You can put
them wherever you like, it's not my problem. They're fully relocatable.
Toolchains built with crosstool-NG are also fully relocatable.
Having the user say beforehand where to install the stuff is also
a good option.
Post by Rob Landley
[*] Remove documentation (NEW)
Nice, and possibly the first question someone who _isn't_ a cross compiler
toolchain developer (but just wants to build and use the thing) might
actually be interested in.
:-)
Post by Rob Landley
Your ./configure still requires you to install makeinfo no matter what this is
set to. You have to install the package so this can delete its output?
Unfortunately, gcc/glibc/... build and install their documentation by
default. I haven't seen any ./configure option that would prevent them
from doing so... :-(
Post by Rob Landley
Wouldn't it be better to group this with a "strip the resulting binaries"
option, and any other space saving switches? (I'm just assuming you _have_
them, somewhere...)
Nope. But that's a good idea. :-)
Post by Rob Landley
[*] Render the toolchain read-only (NEW)
This is something the end user can do fairly easily for themselves, and I'm
not quite sure what the advantage of doing it is supposed to be anyway. In
any case it's an install option, and should probably go with other install
options, but I personally wouldn't have bothered having this option at all.
In fact, it's here and ON by default, and why this is so is clearly explained
both in docs/overview.txt and in the help of this option. Well, that might not
be so obvious, after all. :-(
Post by Rob Landley
[ ] Force downloads (NEW)
I noticed your build doesn't detect whether or not the tarballs downloaded
properly.
Forcing re-downloads every build puts unnecessary strain on the mirrors, and
seems a bit impolite. (Plus your re-download can time out halfway through if
the net hiccups.) But the alternative you've got is your infrastructure
won't notice corrupted tarballs other than by dying.
Yeah. Sad. Will work on it. Until now, it was a minor problem, as there were
more important ones. Now that most of the stuff is functional, it's time to
polish things...
Post by Rob Landley
[ ] Use a proxy (NEW) --->
Wow, are these still used in 2009? Ok? (It just never came up for me...)
Yes! Big and not-so-big companies have proxies to connect you to the internet,
and they use them as a filter to, well, prevent you from looking at pr0n at
work, or going to other unlawful sites, such as hacking and stuff...
Post by Rob Landley
[ ] Use LAN mirror (NEW) --->
In the sub-menu this options, why do you have individual selections instead of
just having 'em provide a URL prefix pointing to the directory in which to
find the packages in question? You already know the name of each package
you're looking for...
Hmmm... There must have been a good idea behind that... Can't think of it
any more... :-(
Post by Rob Landley
(10) connection timeout (NEW)
This is an implementation detail. Users should hardly ever care.
No. I have a case where the network is sooo slow that connections are
established well after the 10s default timeout (17s if I remember correctly).
Post by Rob Landley
My system uses wget instead of curl (because wget is in busybox and curl
isn't).
If you don't have curl, crosstool-NG falls back to using wget. That's just
a matter of taste, here. And not so many people are using busybox-based
workstations. ;-)
Post by Rob Landley
As a higher level design issue, It would have been easier for me to implement
my build system in python than in bash, but the point of doing it in bash is
it's the exact same set of commands you'd run on the command line, in the
order you'd run them, to do this yourself by hand. So to an extent the shell
scripts act as documentation and a tutorial on how to build cross compilers.
(And I added a lot of #comments to help out there, because I _expect_ people
to read the scripts if they care about much more than just grabbing prebuilt
binary tarballs and using them to cross compile stuff.)
That paragraph also applies to crosstool-NG. Word for word, except for
the python stuff, which I don't grok.
Post by Rob Landley
[ ] Stop after downloading tarballs (NEW)
This seems like it should be a command line option.
Granted, same answer as for "Force downloads"
Post by Rob Landley
[ ] Force extractions (NEW)
Ah, you cache the results of tarball extraction too. I hadn't noticed. (I
hadn't bothered to mention that mine's doing it because it's just an
implementation detail.)
This is one of the things my setupfor function does: it extracts source into
build/sources, in a subdirectory with the same name as the package.
Unfortunately, not all packages are well behaved. Some have a hyphen in the
package name and a dash in the corresponding directory name.
Post by Rob Landley
Again, I detect "good/stale" cached data via sha1sums.
I'm missing this. It's on my TODO as well, but low priority...
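Rob's sha1-based staleness check could look something like this (a hedged sketch; the function names are hypothetical, taken from neither tool):

```shell
# Record the checksum of a freshly downloaded tarball, next to it.
stamp_tarball() {
    sha1sum "$1" > "$1.sha1"
}

# A cached tarball is "good" only if it exists and still matches the
# recorded checksum; anything else is treated as stale.
tarball_is_good() {
    [ -f "$1" ] && [ -f "$1.sha1" ] || return 1
    sha1sum --check --status "$1.sha1"
}
```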
Post by Rob Landley
[*] Override config.{guess,sub} (NEW)
I can sort of see this, but it's one of those "you really, really, really need
to know what you're doing, and you might be better off patching or upgrading
the package in question instead".
Yep. Some packages still don't know about *-*-linux-uclibc* tuples...
Sigh...
Post by Rob Landley
[ ] Stop after extracting tarballs (NEW)
Ditto as for "Stop after downloading tarballs".
Post by Rob Landley
(1) Number of parallel jobs (NEW)
My sources/includes.sh autodetects the number of processors and sets CPUS.
You can override it by setting CPUS on the command line. (I often
do "CPUS=1 ./build.sh x86_64" when something breaks so I get more
understandable error messages.)
In general, I try never to ask the user for information I can autodetect sane
defaults for, I just let them override the defaults if they want to.
At work, we have a build farm whose purpose is to build the firmware for our
targets. The machines are quad-CPU. Deploying a new toolchain need not be
done in a snap of the fingers, and building the firmware has priority.
So I use that option to restrict the number of jobs run in parallel.
Post by Rob Landley
(0) Maximum allowed load (NEW)
Ooh, that's nice, and something mine doesn't have.
Yeah! One good point! :-)
Post by Rob Landley
Personally I've never had
a clear enough idea of what loadavg's units were to figure out how it equated
to slowing down my desktop, and I've actually found that my laptop's
interactivity going down the drain is almost never due to loadavg, it's due
to running out of memory and the thing going swap happy with the disk pegged
as constantly active. (The CPU scheduler is way the heck better than the
I/O scheduler, and virtual memory is conceptually horrible and quite possibly
_never_ going to be properly fixed at the theoretical level. You have to
accurately predict the future in order to do it right, that's slightly
_worse_ than solving the halting problem...)
(0) Nice level (NEW)
I already have the number of parallel jobs, and loadavg. Why not have nice as well?...
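Taken together, the three knobs discussed here map fairly directly onto standard tools; a minimal sketch (the variable names are mine, not crosstool-NG's):

```shell
# Autodetect the number of CPUs, but let the user override it from the
# environment (e.g. "CPUS=1 ./build.sh" for readable error messages).
CPUS="${CPUS:-$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 1)}"
# Cap the load average so a shared build machine stays responsive,
# and renice the whole build so higher-priority jobs win.
MAXLOAD="${MAXLOAD:-${CPUS}}"
NICE="${NICE:-10}"

build() {
    nice -n "${NICE}" make -j "${CPUS}" -l "${MAXLOAD}" "$@"
}
```

`make -l` refuses to start new jobs while the system load average exceeds the given ceiling, which is exactly the "maximum allowed load" knob.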
Post by Rob Landley
[*] Use -pipe (NEW)
Why would you ever bother the user with this? It's a gcc implementation
detail, and these days with a modern 2.6 kernel dentry and page caches you
probably can't even tell the difference in benchmarks because the data never
actually hits the disk anyway.
Have you actually benchmarked the difference?
10% gain on my machine (Dual AMD64, 1GiB RAM) at the time of testing.
Much less now that I have a quad-core with 4GiB RAM.
Post by Rob Landley
[ ] Use 'ash' as CONFIG_SHELL (NEW)
A) I haven't got /bin/ash installed. Presumably you need to install it since
the help says it's calling it from an absolute path?
crosstool-NG does not build and install it. If the user wants that, he/she's
responsible for installing it, yes. Maybe I should build my own...
Post by Rob Landley
B) If your scripts are so slow that you need a faster shell to run them,
possibly the problem is with the scripts rather than with the shell?
My scripts are not so slow. ./configure scripts and Makefiles are.
Again, using dash, the build went 10-15% faster on my quad-core.
Post by Rob Landley
I admit that one of the potential weaknesses of my current system is that it
calls #!/bin/bash instead of #!/bin/sh. I agonized over that one a bit. But
I stayed with bash because A) dash is seriously broken, B) bash has been the
default shell of Linux since before the 0.0.1 release.
I do explicitly call bash as well, because I use bashisms in my scripts.
./configure and Makefiles should be POSIX compliant. I am not.
Post by Rob Landley
Maximum log level to see: (INFO) --->
I don't have a decent idea of what you get with each of these. (Yes, I've
read the help.)
OK, that may require a little more explanation in the help.
I don't care about the components' build logs. I just want to know whether
the build was successful or failed. In crosstool-NG the messages are sorted
in order of importance:
ERROR   : crosstool-NG detected an error: failed download/extract/patch/...,
          incorrect settings, internal error... ERRORs are fatal.
WARNING : non-fatal condition that crosstool-NG knows how to work around,
          but you had better give it correct input rather than let it guess.
INFO    : informs the user of the overall process going on. Very terse.
          Tells the current high-level step being done: downloading,
          extracting, building a component...
EXTRA   : informs the user at a finer level of what's going on. For each
          step listed above, prints the sub-sequences: package being
          downloaded/extracted/patched, sub-step in building a component:
          ./configure-ing, make-ing, installing...
DEBUG   : messages aimed at debugging crosstool-NG's behavior.
ALL     : print everything: ./configure output, make output...
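The level hierarchy above can be sketched as a small shell filter (illustrative only; this is not crosstool-NG's actual logging code):

```shell
# Levels, most to least important; messages ranked below the chosen
# threshold are silently dropped.
LOG_LEVELS="ERROR WARNING INFO EXTRA DEBUG ALL"
LOG_LEVEL="${LOG_LEVEL:-INFO}"

log_rank() {
    # Print the position of level $1 in LOG_LEVELS (0 = most important).
    local i=0 l
    for l in ${LOG_LEVELS}; do
        [ "${l}" = "$1" ] && { echo "${i}"; return; }
        i=$((i + 1))
    done
    echo 99
}

log() {
    local level="$1"; shift
    if [ "$(log_rank "${level}")" -le "$(log_rank "${LOG_LEVEL}")" ]; then
        printf '[%s] %s\n' "${level}" "$*"
    fi
}
```

With LOG_LEVEL=INFO, `log ERROR ...` and `log INFO ...` print, while `log EXTRA ...` and below stay silent.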
Post by Rob Landley
# Change window title bar to package now
echo -en "\033]2;$ARCH_NAME $STAGE_NAME $PACKAGE\007"
So you can see in the window title bar what architecture, stage, and package
it's currently building.
Hmm... Nice!
Post by Rob Landley
[ ] Warnings from the tools' builds (NEW)
Again, filtering the output of the build I leave to the user. They're better
at it,
From an end-user perspective (yes, I *am* using crosstool-NG ;-) ), I don't
care what commands are being executed to build this or that package, just
that it's doing it, and that it's not stuck.
Post by Rob Landley
and 90% of the time they just want to know that it's still going, or
that it succeeded, or what error it died with.
Yep. Exactly.
Post by Rob Landley
But I can only _guess_ what they want, so I don't.
Not wrong per se.
Post by Rob Landley
In general, I try not to
assume they're not going to want to do some insane crazy thing I never
thought of, because usually I'm the one doing the insane crazy things the
people who wrote the stuff I'm using never thought of, so I sympathize.
;-)
Post by Rob Landley
[*] Progress bar (NEW)
I have the "dotprogress" function I use for extracting tarballs, prints a
period every 25 lines of input.
Mine rotates the bar every ten lines. Which is better? ;-)
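For illustration, a bar that rotates every ten lines of piped input might look like this (a sketch, not the actual crosstool-NG implementation):

```shell
# Read build output on stdin and advance a rotating bar one step
# for every ten lines consumed.
spinner() {
    n=0
    while IFS= read -r line; do
        n=$((n + 1))
        [ $((n % 10)) -ne 0 ] && continue
        case $(( (n / 10) % 4 )) in
            1) c='|' ;;
            2) c='/' ;;
            3) c='-' ;;
            0) c='\' ;;
        esac
        printf '\r%s' "${c}"
    done
}
```

Typical use would be something like `make 2>&1 | spinner`.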
Post by Rob Landley
I used to change the color of the output so you could see at a glance what
stage it was, but people complained
Right. crosstool-NG's output used to be colored, but that just plainly sucks.
And, believe it or not, some people are still using non-color-capable
terminals...
Post by Rob Landley
[*] Log to a file (NEW)
Again, "./build.sh 2>&1 | tee out.txt". Pretty much programming 101 these
days, if you haven't learned that for building all the other source packages
out there, cross compiling probably isn't something you're ready for.
Don't believe that. I have seen many a newbie asked to build embedded stuff
for an exotic board that barely had support in Linux, said newbie just
out of school, totally amazed that you could actually run something other
than Windows on a PC, let alone that something other than PCs even
existed...

(Well that newbie has grown since, and is now quite a capable Linux guru).
Post by Rob Landley
And I'm at the end of this menu, so I'll pause here for now. (And you were
apologizing for writing a long message... :)
I knew you would surpass me in this respect! ;-P

( Woohoo! I caught up with your mails! :-) )

Regards,
Yann E. MORIN.
--
.-----------------.--------------------.------------------.--------------------.
| Yann E. MORIN | Real-Time Embedded | /"\ ASCII RIBBON | Erics' conspiracy: |
| +0/33 662376056 | Software Designer | \ / CAMPAIGN | ___ |
| --==< ^_^ >==-- `------------.-------: X AGAINST | \e/ There is no |
| http://ymorin.is-a-geek.org/ | _/*\_ | / \ HTML MAIL | v conspiracy. |
`------------------------------^-------^------------------^--------------------'


--
For unsubscribe information see http://sourceware.org/lists.html#faq
Stefan Hallas Andersen
2009-04-07 00:11:50 UTC
Hi Rob and All,

It sounds to me like your problem with crosstool-ng is that it isn't
firmware-linux and you're now using the crosstool-ng mailing list to
promote your own design and tool. I've tried to read through your
extremely long comparison and justification of firmware linux and I
think most of your points boil down to personal preference and beliefs
more than actual valid design arguments against crosstool-ng.

- These tools are obviously very different designs and don't overlap.

In short, my take on crosstool-ng vs. firmware linux is that one is well
contained, well done, works and is useful - while the other just isn't
ready for primetime, looks like an interesting research project and is a
great proof of concept.

Wasn't this supposed to be a crosstool-ng design discussion thread
instead of a >> who can write the longest emails << competition?

Anyway, just my $0.02

Best regards,
Stefan Hallas Andersen
Cisco Systems Inc.
Rob Landley
2009-04-07 03:06:59 UTC
Post by Stefan Hallas Andersen
Hi Rob and All,
It sounds to me like you're problem with crosstool-ng is that it isn't
firmware-linux and you're now using the crosstool-ng mailing list to
promote your own design and tool.
1) Actually this is the crossgcc mailing list, it's not specific to
crosstool-ng. (This list dates back to 1997.)

2) I was asked for my opinion.

3) That's honestly not my intent.
Post by Stefan Hallas Andersen
Where this not supposed to be a crosstool-ng design discussion thread
instead of a >> who can write the longest emails competition << ?
I've so far replied to about the first 1/3 of Yann's first email.

We can take this back off-list if it's annoying you,

Rob
--
GPLv3 is to GPLv2 what Attack of the Clones is to The Empire Strikes Back.

Stefan Hallas Andersen
2009-04-07 03:42:52 UTC
Hi Rob & All,

Don't get me wrong - I'm perfectly fine with you voicing your opinion,
just like I'm entitled to mine. This is an open forum and everybody can
chime in, right?

1) Sure, but the subject line does say [crosstool-NG] Design discussion,
but let's leave it here.
2) Okay - and I appreciate yours just like anyone else.
3) I hope not - but thank you for pointing that out. As an author of a
tool in the same category I'd consider your points more valuable if you
wouldn't reference your own solution but just stick to pointing out the
actual problems with the solution at hand. Not that what you're saying
is necessarily wrong; but let's discuss the problems before we do the
solutions.

Best regards,
Stefan Hallas Andersen
Cisco Systems Inc.
Post by Rob Landley
Post by Stefan Hallas Andersen
Hi Rob and All,
It sounds to me like you're problem with crosstool-ng is that it
isn't
Post by Stefan Hallas Andersen
firmware-linux and you're now using the crosstool-ng mailing list to
promote your own design and tool.
1) Actually this is the crossgcc mailing list, it's not specific to
crosstool-ng. (This list dates back to 1997.)
2) I was asked for my opinion.
3) That's honestly not my intent.
Post by Stefan Hallas Andersen
Wasn't this supposed to be a crosstool-ng design discussion
thread
Post by Stefan Hallas Andersen
instead of a >> who can write the longest emails competition << ?
I've so far replied to about the first 1/3 of Yann's first email.
We can take this back off-list if it's annoying you,
Rob
--
GPLv3 is to GPLv2 what Attack of the Clones is to The Empire Strikes Back.
Mark A. Miller
2009-04-08 09:14:25 UTC
Permalink
On Mon, Apr 6, 2009 at 7:11 PM, Stefan Hallas Andersen
Post by Stefan Hallas Andersen
Hi Rob and All,
It sounds to me like your problem with crosstool-ng is that it isn't
firmware-linux and you're now using the crosstool-ng mailing list to
promote your own design and tool.
As Rob mentioned, it is the crossgcc mailing list, which has been
mostly abandoned; Yann evidently got approval to change the mailing
list name to crosstool-ng, but it still isn't a *proper* crosstool-ng
mailing list. (Yann, I'm willing to provide proper webhosting with
crosstool-ng.com or whatnot, with a proper mailing list, if you care.)
Post by Stefan Hallas Andersen
I've tried to read through your
extremely long comparison and justification of firmware linux and I
think most of your points boil down to personal preference and beliefs
more than actual valid design arguments against crosstool-ng.
- These tools are obviously very different designs and don't overlap.
Both do cross-compilers. There is significant overlap there. Getting
uClibc++ to work in a cross-compiling environment was taken from
Firmware Linux.
Post by Stefan Hallas Andersen
In short, my take on crosstool-ng vs. firmware linux is that one is well
contained, well done, works and is useful - while the other just isn't
ready for primetime, looks like an interesting research project and is a
great proof of concept.
Other than GDB, where Yann has a particular advantage, both are
capable of producing perfectly working cross-compilers for uClibc, and
FWL for uClibc++, for multiple architectures. PPC 440 included.

I believe the discussion amounted to how a cross-compiler was built,
not the red herring of, "Ignore cross-compiling entirely and do native
compilation!" That is where they fork in totally different directions.
But as for building cross-compilers, they're still quite related. Yann
does a good job in selecting seventeen billion different tools such as
binutils and gcc and whatnot, as well as support for glibc, which is
one thing FWL can't do, and has no interest in doing.

Other than that, they're cross-compilers at their core. Personal
biases aside, I believe Rob has an honest intent to improve
crosstool-ng so that it's a sane cross-compiler for glibc-related
products, as well as uClibc. There's non-overlap, in that Yann
attempts to target many, many versions (fracturing his testing base,
though there's a market for it), but there's distinct overlap in how a
cross-compiler should be done.

I've mentioned it to Yann personally, but I'll state it on the list, I
have no issue with him personally, and as for crosstool-ng, it's a
package that he coded in his free time for other people, so criticisms
can only go so far. (Why didn't you do X?!, et cetera). But I think
his cross-compiler could be done better, and that's why he and Rob
decided to have their discussion public.
Post by Stefan Hallas Andersen
Wasn't this supposed to be a crosstool-ng design discussion thread
instead of a >> who can write the longest emails competition << ?
Yann asked Rob as to his opinions, and as opinions go, they're wordy.
So are mine.
Post by Stefan Hallas Andersen
Anyway, just my 0.02$
Of course!
Post by Stefan Hallas Andersen
Best regards,
Stefan Hallas Andersen
Cisco Systems Inc.
-----Original Message-----
Sent: Mon 4/6/2009 3:12 PM
Cc: Rob Landley
Subject: Re: [crosstool-NG] Design discussion
Rob,
All,
OK, I think I'm coping with the back-log ;-)
Post by Rob Landley
Post by Yann E. MORIN
2.b) Ease configuration of the toolchain
In the state, configuring crosstool required editing a file
containing
Post by Rob Landley
Post by Yann E. MORIN
shell variable assignments. There was no proper documentation of
what
Post by Rob Landley
Post by Yann E. MORIN
variables were used, and no clear explanations about each
variables
Post by Rob Landley
Post by Yann E. MORIN
meaning.
My response to this problem was to write documentation.
Sure. But documentation is not all. Fixing and enhancing both the code
and the configuration scheme is also a way to achieve a better end-user
experience (Ahaha! I've wandered too much with the marketing dept lately).
Post by Rob Landley
While I've used kconfig myself, there's an old saying: "If all you
have is a
Post by Rob Landley
hammer, everything looks like a nail".
Hehe! Show me a better and simpler way.
Post by Rob Landley
The failure mode of kconfig is having so much granularity
So what? Should I restrict what the end-user is allowed to do, based solely
on my own experience? If the possibility exists in the tools, then why
prevent the user from using (or not using) that? Configuring a compiler
has soooo many options, it drives you mad. Quite a lot can be inferred
from higher-level options, such as the arch name and the CPU variant, or
the ABI...
Post by Rob Landley
Ironically, kconfig is only really worth using when you have enough
config
Post by Rob Landley
options to bother with it.
But there *are* a lot of options! And not even everything that could be
exposed is available yet!
Post by Rob Landley
When you have small numbers of config options
that are usually going to be off, I prefer environment variables
(with a
Post by Rob Landley
config file in which you can set those in a persistent manner) or
command
Post by Rob Landley
line options.  Since you can set an environment variable on the
command line,
Post by Rob Landley
  FORK=1 ./buildall.sh
I lean towards those.  Possibly a matter of personal taste...
I think so. To configure stuff, I prefer having a GUI that is not vi, but
that is still simple. And the kconfig language and its mconf
interpreter
seem quite fitting, even if not the best. But there is no alternative
that I'm aware of. Plus, it is quite well known thanks to the Linux kernel
using it.
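For readers unfamiliar with the style Rob describes, it boils down to an
option that defaults off unless flipped for one run on the command line
(as in his FORK=1 ./buildall.sh example). A minimal sketch, purely for
illustration (the FORK name is just the one from the example):

```shell
# Hypothetical knob: FORK=1 on the command line requests a parallel
# build; anything else (including unset) means the serial default.
build_mode() {
    if [ "${FORK:-0}" = "1" ]; then
        echo "parallel"
    else
        echo "serial"
    fi
}
```

A persistent config file would simply be a sourced script assigning such
variables before the function runs.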
Post by Rob Landley
Getting back to menuconfig, since it _is_ so central to your design,
let's
Post by Rob Landley
look at the menuconfig entries.  I still have 1.3.2 installed here,
which
Post by Rob Landley
starts with nine sub-menus, let's go into the first, "paths and misc
The first three options in the first menu aren't immediately useful
to a
Post by Rob Landley
  [ ] Use obsolete features
  [ ] Try features marked as EXPERIMENTAL (NEW)
  [ ] Debug crosstool-NG (NEW)
I dunno what your obsolete versions are, I don't know what your
experimental
Post by Rob Landley
options are, and I dunno what debugging crosstool-ng does.  I am not
currently qualified to make any decisions about them, because I
don't know
Post by Rob Landley
what they actually control.
Looking at the help... the "obsolete features" thing seems useless?
We've
Post by Rob Landley
already got menus to select kernel and gcc versions, this just hides
some of
Post by Rob Landley
those versions?  Why?  (Shouldn't it default to the newest stable
version?
Post by Rob Landley
If it doesn't, shouldn't it be _obvious_ that the newest stable
version is
Post by Rob Landley
probably what you want?)
OK, obsolete means that I can't afford to ensure that they still build.
Post by Rob Landley
Marking old versions "deprecated" might make a certain mount of
sense.
Post by Rob Landley
marking them obsolete and hiding them, but still having them
available, less
Post by Rob Landley
so.
OK, deprecated is much more meaningful, I admit. Let's settle for
s/OBSOLETE/DEPRECATED/, then.
Post by Rob Landley
Similarly, the "experimental" one seems useless because when you
enable it the
Post by Rob Landley
experimental versions already say "EXPERIMENTAL" in their
descriptions
Post by Rob Landley
(wandered around until I found the binutils version choice menu and
looked at
Post by Rob Landley
it to be sure).  They're marked anyway, so why is an option to hide
them an
Post by Rob Landley
improvement?
EXPERIMENTAL in the prompt is just a string. A config knob makes the user
really aware that he/she's trying something that might break.
Plus, it marks the resulting config file as containing EXPERIMENTAL
features/versions/... and is easier to process.
Post by Rob Landley
As for the third, wasn't there a debug menu?  Why is "Debug
crostool-NG" in
Post by Rob Landley
the paths menu?  (Rummage, rummage... Ah, I see, the debug menu is a
list of
Post by Rob Landley
packages you might want to build and add to the toolchain.  Ok, sort
of makes
Post by Rob Landley
sense.  Still, the third thing a newbie sees going through in order
as
Post by Rob Landley
a "very very expert" option.  Moving on...)
That's why the former is titled "Debug crosstool-NG", while the latter
is titled "debug facilities". Again, maybe the wording is wrong.
Post by Rob Landley
  ()  Local tarballs directory (NEW)
  (${CT_TOP_DIR}/targets) Working directory (NEW)
  (${HOME}/x-tools/${CT_TARGET}) Prefix directory (NEW)
Most users aren't going to care where the local tarballs directory
is, or the
Post by Rob Landley
working directory.
Most. Not all. And the help entries are there to tell the user whether
it is wise to change them.
Post by Rob Landley
The "prefix directory" is presumably different from where
we just installed with --prefix.
The help roughly says: the path where the toolchain is expected to run from.
Unfortunately, there is yet no support for DESTDIR, the place where the
toolchain will be installed, to allow installing out-of-tree. For the time
being, the DESTDIR is plainly /, that is, the toolchain is expected to run
on the system it is built on. But that should eventually be fixed.
Post by Rob Landley
I suppose it's nice that you can override
the defaults, but having it be one of the first questions a user's
posed with
Post by Rob Landley
when going through the options in order trying to configure the
thing isn't
Post by Rob Landley
really very helpful.  It's not my problem, just _work_.
Where should I install the toolchain? In the user's home directory?
This is indeed the default, but you are complaining about it!
If not in ${HOME}, where should I install the toolchain? In /opt?
In /usr/local? Bah, most users don't have write access there.
Except root. But building as root is asking for problems!
Post by Rob Landley
(I also don't know
what CT_TOP_DIR and CT_TARGET are, I'd have to go look them up.)
docs/overview.txt is advertised in the top-level README.
Post by Rob Landley
For comparison, my system creates a tarball from the resulting cross
compiler,
Post by Rob Landley
and leaves an extracted copy as "build/cross-compiler-$ARCH".  You
can put
Post by Rob Landley
them wherever you like, it's not my problem.  They're fully
relocatable.
Toolchains built with crosstool-NG are also fully relocatable.
Having the user tell beforehand where to install the stuff is also
another good option.
Post by Rob Landley
  [*] Remove documentation (NEW)
Nice, and possibly the first question someone who _isn't_ a cross
compiler
Post by Rob Landley
toolchain developer (but just wants to build and use the thing)
might
Post by Rob Landley
actually be interested in.
:-)
Post by Rob Landley
Your ./configure still requires you to install makeinfo no matter
what this is
Post by Rob Landley
set to.  You have to install the package so this can delete its
output?
Unfortunately, gcc/glibc/... build and install their documentation by
default. I haven't seen any ./configure option that would prevent them
from doing so... :-(
Post by Rob Landley
Wouldn't it be better to group this with a "strip the resulting
binaries"
Post by Rob Landley
option, and any other space saving switches?  (I'm just assuming you
_have_
Post by Rob Landley
them, somewhere...)
Nope. But that's a good idea. :-)
Post by Rob Landley
  [*] Render the toolchain read-only (NEW)
This is something the end user can do fairly easily for themselves,
and I'm
Post by Rob Landley
not quite sure what the advantage of doing it is supposed to be
anyway.  In
Post by Rob Landley
any case it's an install option, and should probably go with other
install
Post by Rob Landley
options, but I personally wouldn't have bothered having this option
at all.
In fact, it's here and ON by default, and why this is so is clearly explained
both in docs/overview.txt and in the help of this option. Well, that might not
be so obvious, after all. :-(
Post by Rob Landley
  [ ] Force downloads (NEW)
I noticed your build doesn't detect whether or not the tarballs
downloaded
Post by Rob Landley
properly.
Forcing re-downloads every build puts unnecessary strain on the
mirrors, and
Post by Rob Landley
seems a bit impolite.  (Plus your re-download can time out halfway
through if
Post by Rob Landley
the net hiccups.)  But the alternative you've got is your
infrastructure
Post by Rob Landley
won't notice corrupted tarballs other than by dying.
Yeah. Sad. Will work on it. Until now, it was a minor problem, as there were
more important ones. Now that most of the stuff is functional, it's time to
polish things...
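For the record, such a check could look like the following minimal
sketch: record each tarball's sha1 once, and re-fetch only when the
cached file is absent or no longer matches. The helper name and calling
convention are hypothetical, not crosstool-NG's actual code:

```shell
# Return 0 when the cached tarball exists and matches its recorded
# sha1, non-zero when it is missing or corrupt and must be fetched
# again. (Illustrative helper, not crosstool-NG's real implementation.)
tarball_is_good() {
    local tarball="$1" expected="$2" actual
    [ -f "${tarball}" ] || return 1
    actual="$(sha1sum "${tarball}" | cut -d' ' -f1)"
    [ "${actual}" = "${expected}" ]
}
```

The download step would then skip fetching whenever tarball_is_good
succeeds, avoiding both corrupted caches and needless re-downloads.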
Post by Rob Landley
  [ ] Use a proxy (NEW)  --->
Wow, are these still used in 2009?  Ok?  (It just never came up for
me...)
Yes! Big and not-so-big companies have proxies to connect you to the internet,
and they use them as a filter to, well, prevent you from browsing pr0n at
work, or going to other unlawful sites, such as hacking and stuff...
Post by Rob Landley
  [ ] Use LAN mirror (NEW)  --->
In the sub-menu this options, why do you have individual selections
instead of
Post by Rob Landley
just having 'em provide a URL prefix pointing to the directory in
which to
Post by Rob Landley
find the packages in question?  You already know the name of each
package
Post by Rob Landley
you're looking for...
Hmmm... There must have been a good idea behind that... Can't think of it
any more... :-(
Post by Rob Landley
  (10) connection timeout (NEW)
This is an implementation detail.  Users should hardly ever care.
No. I have a case where the network is sooo slow that connections are
established well after the 10s default timeout (17s if I remember correctly).
Post by Rob Landley
My system uses wget instead of curl (because wget is in busybox and
curl
Post by Rob Landley
isn't).
If you don't have curl, crosstool-NG falls back to using wget. That's just
a matter of taste, here. And not so many people are using busybox-based
workstations. ;-)
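The fallback described here can be sketched as a single helper; the
flags shown are common defaults, not necessarily the exact ones
crosstool-NG passes:

```shell
# Prefer curl, fall back to wget, and fail loudly when neither is
# installed. (Illustrative sketch of the fallback described above.)
do_download() {
    local url="$1" dest="$2"
    if command -v curl >/dev/null 2>&1; then
        curl --fail --location --connect-timeout 10 -o "${dest}" "${url}"
    elif command -v wget >/dev/null 2>&1; then
        wget --timeout=10 -O "${dest}" "${url}"
    else
        echo "do_download: neither curl nor wget found" >&2
        return 1
    fi
}
```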
Post by Rob Landley
As a higher level design issue, It would have been easier for me to
implement
Post by Rob Landley
my build system in python than in bash, but the point of doing it in
bash is
Post by Rob Landley
it's the exact same set of commands you'd run on the command line,
in the
Post by Rob Landley
order you'd run them, to do this yourself by hand.  So to an extent
the shell
Post by Rob Landley
scripts act as documentation and a tutorial on how to build cross
compilers.
Post by Rob Landley
(And I added a lot of #comments to help out there, because I
_expect_ people
Post by Rob Landley
to read the scripts if they care about much more than just grabbing
prebuilt
Post by Rob Landley
binary tarballs and using them to cross compile stuff.)
That paragraph also applies to crosstool-NG. Word for word, except for
the python stuff, that I don't grok.
Post by Rob Landley
  [ ] Stop after downloading tarballs (NEW)
This seems like it should be a command line option.
Granted, same answer as for "Force downloads"
Post by Rob Landley
  [ ] Force extractions (NEW)
Ah, you cache the results of tarball extraction too.  I hadn't
noticed.  (I
Post by Rob Landley
hadn't bothered to mention that mine's doing it because it's just an
implementation detail.)
This is one of the things my setupfor function does: it extracts
source into
Post by Rob Landley
build/sources, in a subdirectory with the same name as the package.
Unfortunately not all packages are well behaved. Some have a hyphen in the
tarball name but an underscore in the corresponding directory name.
Post by Rob Landley
Again, I detect "good/stale" cached data via sha1sums.
I'm missing this. It's on my TODO as well, but low priority...
Post by Rob Landley
  [*] Override config.{guess,sub} (NEW)
I can sort of see this, but it's one of those "you really, really,
really need
Post by Rob Landley
to know what you're doing, and you might be better off patching or
upgrading
Post by Rob Landley
the package in question instead".
Yep. Some packages still don't know about *-*-linux-uclibc* tuples...
Sigh...
Post by Rob Landley
  [ ] Stop after extracting tarballs (NEW)
Ditto as for "Stop after downloading tarballs".
Post by Rob Landley
  (1) Number of parallel jobs (NEW)
My sources/includes.sh autodetects the number of processors and sets
CPUS.
Post by Rob Landley
You can override it by setting CPUS on the command line.  (I often
do "CPUS=1 ./build.sh x86_64" when something breaks so I get more
understandable error messages.)
In general, I try never to ask the user for information I can
autodetect sane
Post by Rob Landley
defaults for, I just let them override the defaults if they want to.
At work, we have a build farm whose purpose is to build the firmwares for our
targets. The machines are quad-CPUs. Deploying a new toolchain need not be
done in the snap of fingers, and building the firmwares has priority.
So I use that option to restrict the number of jobs to run in parallel.
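Autodetection with an override, as Rob suggests, can coexist with such a
farm policy. A sketch (CT_JOBS is a hypothetical variable name for the
override, not crosstool-NG's actual config symbol):

```shell
# Default the job count to the number of online CPUs, but honor an
# explicit cap, e.g. one imposed by a shared build farm's policy.
detect_jobs() {
    local cpus
    cpus="$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 1)"
    echo "${CT_JOBS:-${cpus}}"
}
```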
Post by Rob Landley
  (0) Maximum allowed load (NEW)
Ooh, that's nice, and something mine doesn't have.
Yeah! One good point! :-)
Post by Rob Landley
Personally I've never had
a clear enough idea of what loadavg's units were to figure out how
it equated
Post by Rob Landley
to slowing down my desktop, and I've actually found that my laptop's
interactivity going down the drain is almost never due to loadavg,
it's due
Post by Rob Landley
to running out of memory and the thing going swap happy with the
disk pegged
Post by Rob Landley
as constantaly active.  (The CPU scheduler is way the heck better
than the
Post by Rob Landley
I/O scheduler, and virtual memory is conceptually horrible and quite
possibly
Post by Rob Landley
_never_ going to be properly fixed at the theoretical level.  You
have to
Post by Rob Landley
accurately predict the future in order to do it right, that's
slightly
Post by Rob Landley
_worse_ than solving the halting problem...)
  (0) Nice level (NEW)
I already have the number of parallel jobs, and loadavg. Why not have nice as well?...
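For reference, the three knobs map directly onto a make invocation: -j
caps parallel jobs, -l makes make back off when the load average exceeds
the given value, and nice lowers scheduling priority. A throwaway
demonstration (the numbers are illustrative):

```shell
# Build a trivial Makefile with capped jobs (-j 4), a load-average
# ceiling (-l 8.0), and reduced scheduling priority (nice -n 10).
workdir="$(mktemp -d)"
printf 'all:\n\t@echo built\n' > "${workdir}/Makefile"
nice -n 10 make -C "${workdir}" -j 4 -l 8.0 all
rm -rf "${workdir}"
```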
Post by Rob Landley
  [*] Use -pipe (NEW)
Why would you ever bother the user with this?  It's a gcc
implementation
Post by Rob Landley
detail, and these days with a modern 2.6 kernel dentry and page
caches you
Post by Rob Landley
probably can't even tell the difference in benchmarks because the
data never
Post by Rob Landley
actually hits the disk anyway.
Have you actually benchmarked the difference?
10% gain on my machine (Dual AMD64, 1GiB RAM) at the time of testing.
Much less now I have a Quad-core with 4GiB RAM.
Post by Rob Landley
  [ ] Use 'ash' as CONFIG_SHELL (NEW)
A) I haven't got /bin/ash installed.  Presumablly you need to
install it since
Post by Rob Landley
the help says it's calling it from an absolute path?
crosstool-NG does not build and install it. If the user wants that, he/she's
responsible for installing it, yes. Maybe I should build my own...
Post by Rob Landley
B) If your scripts are so slow that you need a faster shell to run
them,
Post by Rob Landley
possibly the problem is with the scripts rather than with the shell?
My scripts are not that slow. ./configure scripts and Makefiles are.
Again, using dash, the build went 10-15% faster on my quad-core.
Post by Rob Landley
I admit that one of the potential weaknesses of my current system is
that it
Post by Rob Landley
calls #!/bin/bash instead of #!/bin/sh.  I agonized over that one a
bit.  But
Post by Rob Landley
I stayed with bash because A) dash is seriously broken, B) bash has
been the
Post by Rob Landley
default shell of Linux since before the 0.0.1 release.
I do explicitly call bash as well, because I use bashisms in my scripts.
./configure and Makefiles should be POSIX compliant. I am not.
Post by Rob Landley
  Maximum log level to see: (INFO)  --->
I don't have a decent idea of what you get with each of these.
(Yes, I've
Post by Rob Landley
read the help.)
OK, that may require a little bit more explanation in the help.
I don't care about the components' build logs. I just want to know whether the
build was successful, or failed. In crosstool-NG the messages are sorted:
ERROR   : crosstool-NG detected an error: failed download/extract/patch/...,
          incorrect settings, internal error... ERRORs are fatal.
WARNING : non-fatal condition that crosstool-NG knows how to work around,
          but you'd better give it correct input rather than letting it guess.
INFO    : informs the user of the overall process going on. Very terse.
          Tells the current high-level step being done: downloading,
          extracting, building a component...
EXTRA   : informs the user at a finer level of what's going on. For each
          step listed above, prints the sub-steps: package being
          downloaded/extracted/patched, sub-steps in building a component:
          ./configure-ing, make-ing, installing...
DEBUG   : messages aimed at debugging crosstool-NG's behavior.
ALL     : print everything: ./configure output, make output...
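The filtering implied by that list can be sketched as a small helper
that prints a message only when its level is within the configured
verbosity (the function is illustrative, not crosstool-NG's real
logger):

```shell
# Print "LEVEL   : message" only when LEVEL is at or above the
# configured threshold; ERROR is most important, ALL least.
CT_LOG_LEVEL="INFO"
log() {
    local level="$1"; shift
    local levels="ERROR WARNING INFO EXTRA DEBUG ALL"
    local i=0 max=0 cur=0 l
    for l in ${levels}; do
        i=$((i + 1))
        [ "${l}" = "${CT_LOG_LEVEL}" ] && max=${i}
        [ "${l}" = "${level}" ] && cur=${i}
    done
    if [ "${cur}" -le "${max}" ]; then
        printf '%-8s: %s\n' "${level}" "$*"
    fi
}
```

With CT_LOG_LEVEL=INFO, ERROR/WARNING/INFO messages appear while
EXTRA/DEBUG/ALL ones are silently dropped.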
Post by Rob Landley
  # Change window title bar to package now
  echo -en "\033]2;$ARCH_NAME $STAGE_NAME $PACKAGE\007"
So you can see in the window title bar what architecture, stage, and
package
Post by Rob Landley
it's currently building.
Hmm... Nice!
Post by Rob Landley
  [ ] Warnings from the tools' builds (NEW)
Again, filtering the output of the build I leave to the user.
They're better
Post by Rob Landley
at it,
From an end-user perspective (yes, I *am* using crosstool-NG ;-) ), I don't
care what commands are being executed to build this or that package, just
that it's doing it, and that it's not stuck.
Post by Rob Landley
and 90% of the time they just want to know that it's still going,
or
Post by Rob Landley
that it succeeded, or what error it died with.
Yep. Exactly.
Post by Rob Landley
But I can only _guess_ what they want, so I don't.
Not wrong per se.
Post by Rob Landley
In general, I try not to
assume they're not going to want to do some insane crazy thing I
never
Post by Rob Landley
thought of, because usually I'm the one doing the insane crazy
things the
Post by Rob Landley
people who wrote the stuff I'm using never thought of, so I
sympathize.
;-)
Post by Rob Landley
  [*] Progress bar (NEW)
I have the "dotprogress" function I use for extracting tarballs,
prints a
Post by Rob Landley
period every 25 lines of input.
Mine rotates the bar every ten lines. Which is better? ;-)
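Such a rotating bar can be sketched as a filter over the build output
(purely illustrative, not either tool's actual code):

```shell
# Read build output on stdin, advance a spinner every ten lines, and
# clear the spinner when input ends.
progress_bar() {
    local chars='|/-\' n=0 line
    while IFS= read -r line; do
        n=$((n + 1))
        if [ $((n % 10)) -eq 0 ]; then
            printf '\r%s' "${chars:$(( (n / 10) % 4 )):1}"
        fi
    done
    printf '\r \r'
}
```

Used as, e.g., make 2>&1 | progress_bar.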
Post by Rob Landley
I used to change the color of the output so you could see at a
glance what
Post by Rob Landley
stage it was, but people complained
Right. crosstool-NG's output used to be colored, but that just plain sucked.
And, believe it or not, some people are still using non-color-capable
terminals...
Post by Rob Landley
  [*] Log to a file (NEW)
Again, "./build.sh 2>&1 | tee out.txt".  Pretty much programming 101
these
Post by Rob Landley
days, if you haven't learned that for building all the other source
packages
Post by Rob Landley
out there, cross compiling probably isn't something you're ready
for.
Don't believe that. I have seen many a newbie asked to build embedded stuff
on an exotic board that Linux barely had support for, said newbie just
getting out of school, totally amazed at the fact that you could actually
run something other than Windows on a PC, let alone that something other
than PCs even existed...
(Well, that newbie has grown since, and is now quite a capable Linux guru.)
Post by Rob Landley
And I'm at the end of this menu, so I'll pause here for now.  (And
you were
Post by Rob Landley
apologizing for writing a long message... :)
I knew you would surpass me in this respect! ;-P
( Woohoo! I caught up with your mails! :-) )
Regards,
Yann E. MORIN.
--
.-----------------.--------------------.------------------.--------------------.
|  Yann E. MORIN  | Real-Time Embedded | /"\ ASCII RIBBON | Erics' conspiracy: |
| +0/33 662376056 | Software  Designer | \ / CAMPAIGN     |  ___               |
| --==< ^_^ >==-- `------------.-------:  X  AGAINST      |  \e/  There is no  |
| http://ymorin.is-a-geek.org/ | _/*\_ | / \ HTML MAIL    |   v   conspiracy.  |
`------------------------------^-------^------------------^--------------------'
--
Mark A. Miller
***@mirell.org

"My greatest strength, I guess it would be my humility. My greatest
weakness, it's possible that I'm a little too awesome" - Barack Obama

Yann E. MORIN
2009-04-08 16:51:03 UTC
Permalink
Hello All!
Post by Mark A. Miller
On Mon, Apr 6, 2009 at 7:11 PM, Stefan Hallas Andersen
Post by Stefan Hallas Andersen
It sounds to me like you're problem with crosstool-ng is that it isn't
firmware-linux and you're now using the crosstool-ng mailing list to
promote your own design and tool.
As Rob mentioned, it is the crossgcc mailing list,
Yes, this is true.
Post by Mark A. Miller
which has been
mostly abandoned and Yann got approval to change the mailing list name
evidently to crosstool-ng, but still isn't a *proper* crosstool-ng
mailing list.
I didn't get approval to change it to a crosstool-NG dedicated mailing
list. It just happens that crosstool-NG builds cross-compilers based on
gcc, so it is suitable to discuss here.

And that will be my last post in this thread. I think we've had enough
/information/ ;-) on the subject. I'll use it to find places of improvement
in crosstool-NG, and I even know of a few just off the top of my head.

Thank you all for your contributions, and for bearing with us. ;-)

Regards,
Yann E. MORIN.
--
.-----------------.--------------------.------------------.--------------------.
| Yann E. MORIN | Real-Time Embedded | /"\ ASCII RIBBON | Erics' conspiracy: |
| +0/33 662376056 | Software Designer | \ / CAMPAIGN | ___ |
| --==< ^_^ >==-- `------------.-------: X AGAINST | \e/ There is no |
| http://ymorin.is-a-geek.org/ | _/*\_ | / \ HTML MAIL | v conspiracy. |
`------------------------------^-------^------------------^--------------------'


Rob Landley
2009-04-09 01:38:21 UTC
Permalink
Post by Mark A. Miller
On Mon, Apr 6, 2009 at 7:11 PM, Stefan Hallas Andersen
I believe the discussion amounted to how a cross-compiler was built,
not the red herring of, "Ignore cross-compiling entirely and do native
compilation!" That is where they fork in totally different directions.
But as for building cross-compilers, they're still quite related.
Actually, the discussion clarified for me that they're not as related as I
thought. The goals of the two projects are very different.

Crosstool-NG seems to be aimed at reverse engineering the specific cross
compiler variants used to build existing binary root filesystems, so you can
extend those root filesystems without replacing any of the existing binaries,
or having to statically link your new additions.

For FWL, my only goal was "support reasonably efficient execution on this
target hardware". One of my design assumptions was that you were going to
either build a fresh filesystem entirely from source, or statically link any
additions to existing (subtly incompatible) filesystems. This let me
eliminate an awful lot of complexity which Yann has to face head on.

So with crosstool-NG, you can't really ask "what's the toolchain for PPC 440"
because it's capable of producing over 100 of them _just_for_ppc_440_. (4
binutils versions, times 9 gcc versions, times 16 Linux versions, without
even enabling the obsolete or experimental options. Then there's whether to
target Linux or bare metal, whether to use sjlj exceptions or dwarf2, and so
on.)

My project assumes that "what's the toolchain for PPC 440" should have a
simple answer, including a URL where you can download the prebuilt binary.
Post by Mark A. Miller
I've mentioned it to Yann personally, but I'll state it on the list, I
have no issue with him personally, and as for crosstool-ng, it's a
package that he coded in his free time for other people, so criticisms
can only go so far. (Why didn't you do X?!, et cetera). But I think
his cross-compiler could be done better, and that's why he and Rob
decided to have their discussion public.
Just about everything can be improved. I think we've added enough items to
his TODO list for one week. :)
Post by Mark A. Miller
Post by Stefan Hallas Andersen
Where this not supposed to be a crosstool-ng design discussion thread
instead of a >> who can write the longest emails competition << ?
Yann asked Rob as to his opinions, and as opinions go, they're wordy.
So are mine.
I sort of started it. Yann emailed me in response to my March 7 blog entry,
which was largely about my first reactions to crosstool-ng:

http://landley.net/notes-2009.html#07-03-2009

He said he only catches up on my blog about once a month, so it took him a
while to notice I'd mentioned his stuff. We emailed back and forth a bit
privately, and I suggested that we have the discussion on this list in case
anybody else wanted to contribute to it or found it interesting. (In
retrospect, that part was apparently a mistake.)

I think the thread's pretty much wound down now...

Rob
--
GPLv3 is to GPLv2 what Attack of the Clones is to The Empire Strikes Back.

Thomas Charron
2009-04-11 04:14:33 UTC
Permalink
Post by Rob Landley
Crosstool-NG seems to be aimed at reverse engineering the specific cross
compiler variants used to build existing binary root filesystems, so you can
extend those root filesystems without replacing any of the existing binaries,
or having to statically link your new additions.
And what about those of us who care about bare metal?
Post by Rob Landley
So with crosstool-NG, you can't really ask "what's the toolchain for PPC 440"
because it's capable of producing over 100 of them _just_for_ppc_440_.  (4
binutils versions, times 9 gcc versions, times 16 Linux versions, without
even enabling the obsolete or experimental options.  Then there's whether to
target Linux or bare metal, whether to use sjlj exceptions or dwarf2, and so
on.)
My project assumes that "what's the toolchain for PPC 440" should have a
simple answer, including a URL where you can download the prebuilt binary.
There is no single toolchain. That's an assumption that works for
*your* environment.
--
-- Thomas

Rob Landley
2009-04-11 05:13:47 UTC
Permalink
Post by Thomas Charron
Post by Rob Landley
Crosstool-NG seems to be aimed at reverse engineering the specific cross
compiler variants used to build existing binary root filesystems, so you
can extend those root filesystems without replacing any of the existing
binaries, or having to statically link your new additions.
And those of us who are caring about bare metal?
I've used a jtag to install a bootloader and linux kernel on bare metal, so
presumably you mean you want to build something other than a Linux system to
install on that bare metal? (Such as building busybox against
newlib/libgloss?)

How does this differ from building a very complicated bootloader, or linking
against a different C library? (If you're building a complete new system on
the bare metal, do you particularly care about binutils or gcc versions other
than "fairly recent"?)
Post by Thomas Charron
Post by Rob Landley
So with crosstool-NG, you can't really ask "what's the toolchain for PPC
440" because it's capable of producing over 100 of them
_just_for_ppc_440_.  (4 binutils versions, times 9 gcc versions, times 16
Linux versions, without even enabling the obsolete or experimental
options.  Then there's whether to target Linux or bare metal, whether to
use sjlj exceptions or dwarf2, and so on.)
My project assumes that "what's the toolchain for PPC 440" should have a
simple answer, including a URL where you can download the prebuilt binary.
There is no single toolchain. That's an assumption that works for
*your* environment.
Could be, but I still don't understand why. Care to explain?

I've built many different strange things with generic-ish toolchains, and
other than libgcc* being evil without --disable-shared, swapping out the
built-in libraries and headers after the toolchain is built is fairly
straightforward (for C anyway). You can do it as a wrapper even.

(I suppose you could be referring to languages other than C? C++ is a bit
more complicated, but then it always is. Java is its own little world, but
they put a lot of effort into portability. Haven't poked at Fortran since
the 90's.)

Rob
--
GPLv3 is to GPLv2 what Attack of the Clones is to The Empire Strikes Back.

Thomas Charron
2009-04-11 05:26:30 UTC
Post by Rob Landley
  And those of us who care about bare metal?
I've used a jtag to install a bootloader and linux kernel on bare metal, so
presumably you mean you want to build something other than a linux system to
install on that bare metal?  (Such as building busybox against
newlib/libgloss?)
I'm talking about bare metal. Typically, these systems have no more
RAM than is present on the processor. Like, 64k. There is no
bootloader, no busybox, and most specifically, no OS.
Post by Rob Landley
How does this differ from building a very complicated bootloader, or linking
against a different C library?  (If you're building a complete new system on
the bare metal, do you particularly care about binutils or gcc versions other
than "fairly recent"?)
Yes. Since GCC is generally tested on 'real' systems, some versions
perform differently than others depending on the target processor
itself. In some cases, a version of GCC simply won't
work at all for a given processor.
Post by Rob Landley
  There is no single toolchain.  That's an assumption that works for
*your* environment.
Could be, but I still don't understand why.  Care to explain?
See above. Bare metal *ISN'T* running Linux on a small box. It's
another beast entirely.
Post by Rob Landley
I've built many different strange things with generic-ish toolchains, and
other than libgcc* being evil without --disable-shared, swapping out the
built-in libraries and headers after the toolchain is built is fairly
straightforward (for C anyway).  You can do it as a wrapper even.
(I suppose you could be referring to languages other than C?  C++ is a bit
more complicated, but then it always is.  Java is its own little world, but
they put a lot of effort into portability.  Haven't poked at Fortran since
the 90's.)
Specifically, I've been working with a mashed in version of newlib,
and newlib-lpc. In those cases, you actually don't use the GNU C
library at all. Additionally, you can use newlib and gcc to compile
C++ applications (however, no STL, etc support). I also have a small
side project to try to move the uclibc++ libraries to bare metal.
--
-- Thomas

Rob Landley
2009-04-12 00:13:49 UTC
Post by Thomas Charron
Post by Rob Landley
  And those of us who are caring about bare metal?
I've used a jtag to install a bootloader and linux kernel on bare metal,
so presumably you mean you want to build something other than a linux
system to install on that bare metal?  (Such as building busybox against
newlib/libgloss?)
I'm talking about bare metal. Typically, these systems have no more
RAM than is present on the processor. Like, 64k. There is no
bootloader, no busybox, and most specifically, no OS.
Yeah, I've encountered those, and written code for them. Often in assembly,
since with those constraints you need every byte, and thus you haven't got
the luxury of coding in C, so I'm not sure how it's relevant here. (Last I
checked, gcc only supported 32 bit and higher targets as a policy decision,
which rules out the z80 and such.)

I'm impressed by the way the OpenBios guys use C code directly from ROM before
the DRAM controller is set up. They put the CPU cache into direct mapped
writeback mode, zero the first few cache lines, set the stack pointer to the
start of the address range they just dirtied, and then jump to C code and
make sure to never touch ANY other memory until the DRAM controller is up and
stabilized. I.E. they're using the cache as their stack, so they can
initialize the dram controller from C code instead of having to do it in
assembly.

Neat trick, I thought. But they don't consider it cross compiling, any more
than the linux bootup code to set up page tables and jump from 16 bits to 32
bits was cross compiling...
Post by Thomas Charron
Post by Rob Landley
How does this differ from building a very complicated bootloader, or
linking against a different C library?  (If you're building a complete
new system on the bare metal, do you particularly care about binutils or
gcc versions other than "fairly recent"?)
Yes. Since GCC is generally tested on 'real' systems, some versions
perform differently than others depending on the target processor
itself.
Some variants of gcc are broken, yes.
Post by Thomas Charron
In some cases, a version of GCC simply won't
work at all for a given processor.
Yes, that's why you need a version of GCC that's capable of outputting code
for the processor. (So how do _you_ configure stock gcc+binutils source to
output code for z80 or 8086 targets? Yes, I'm aware of
http://www.delorie.com/djgpp/16bit/gcc/ and I'm also aware it's based on gcc
2.7 and hasn't been updated in 11 years.)
Post by Thomas Charron
Post by Rob Landley
  There is no single toolchain.  That's an assumption that works for
*your* environment.
Could be, but I still don't understand why.  Care to explain?
See above. Bare metal *ISN'T* running Linux on a small box. It's
another beast entirely.
Yes, I know. You can't natively compile on a target that can't run a
compiler. I agree. How does that mean you need more than one compiler
targeting the same hardware?

I think you're confusing two different points I've made. The first is "You
should be able to have a usable somewhat generic cross compiler for a given
target architecture" with "When your target is Linux you should be able to
build natively under emulation, and thus avoid cross compiling." Those are
two completely different arguments.

(And the second argument never claimed that the emulator and the target
hardware you actually deployed would have exactly the same hardware, any more
than Ubuntu's build servers and my laptop have exactly the same hardware. If
I build a boot floppy image from my laptop, am I cross compiling? But it's
still a different argument.)
Post by Thomas Charron
Post by Rob Landley
I've built many different strange things with generic-ish toolchains, and
other than libgcc* being evil without --disable-shared, swapping out the
built-in libraries and headers after the toolchain is built is fairly
straightforward (for C anyway).  You can do it as a wrapper even.
(I suppose you could be referring to languages other than C?  C++ is a
bit more complicated, but then it always is.  Java is its own little
world, but they put a lot of effort into portability.  Haven't poked at
Fortran since the 90's.)
Specifically, I've been working with a mashed in version of newlib,
and newlib-lpc. In those cases, you actually don't use the GNU C
library at all.
Yes, that would be the swapping out the built-in libraries and headers part,
above. This can be done after the compiler is built with a fairly simple
wrapper. (If gcc wasn't constructed entirely out of unfounded assumptions,
you wouldn't even need to wrap it.)

C Compilers only have a half dozen interesting search paths. The four
nontrivial ones are two #include paths (one for the compiler's built-in
headers ala stdarg.h, and one for the system headers), and two library paths
(one for the compiler's built-in libraries ala libgcc, and one for the system
libraries). The two trivial ones are the search $PATH to find the linker
and assembler and so on, and the files specified on the command line (which
is why it cares about the current directory).

This is a slight oversimplification, and assumes you can find crt1.o and
friends in the library search path. It also assumes that if you have a
non-elf output format you'll either supply your own linking tools or have a
tool that converts an ELF file into your desired format (such as binflat or
the kernel's various zImage generators). But generally these days, those
assumptions are true.

Doing "gcc hello.c" on a gcc built with --disable-shared actually works out to
a command line something like:

gcc -nostdlib -Wl,--dynamic-linker,/lib/ld-uClibc.so.0 \
-Wl,-rpath-link,/path/to/lib -L/path/to/lib -L/path/to/gcc/lib \
-nostdinc -isystem /path/to/include -isystem /path/to/gcc/include \
/path/to/lib/crti.o /path/to/gcc/lib/crtbegin.o /path/to/lib/crt1.o \
hello.c -lgcc -lc -lgcc /path/to/gcc/lib/crtend.o /path/to/lib/crtn.o

That's telling gcc "no, you actually _don't_ know where anything is, here it
all is explicitly". (Yes, -lgcc is in there twice. Long story.)

The builds of the linux kernel and uClibc already do this, they feed -nostdinc
and --nostdlib to the compiler and then explicitly feed in the header files
and library paths they want. (The tricky part is that libgcc_s.so is
horrible... but it's also optional, and the reason the "--static-libgcc" flag
exists.)
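Rob's "you can do it as a wrapper" point can be condensed into a tiny shell function. This is a minimal sketch, not anything either project ships: the compiler name and both directories are placeholders, and the crt*.o startup-file juggling from the command line above is omitted for brevity. REAL_GCC defaults to `echo` plus a placeholder name so the sketch can be dry-run without a real cross toolchain.

```shell
# Minimal sketch of a wrapper that tells gcc "you don't know where
# anything is" and hands every search path back to it explicitly.
# All names and paths are placeholders; crt*.o handling is omitted.
cross_cc() {
    ${REAL_GCC:-echo arm-unknown-linux-gnu-gcc} \
        -nostdinc -isystem "${INC_DIR:-/path/to/include}" \
        -nostdlib -L"${LIBC_DIR:-/path/to/lib}" \
        "$@" -lgcc -lc -lgcc
}

cross_cc hello.c   # dry-run: prints the expanded command line
```

Pointing REAL_GCC at a real cross compiler and the two directories at a swapped-in C library is the whole trick for C; C++, as noted above, takes more care.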

It's much easier to get the behavior you want out of the compiler if you
understand what the compiler is actually _doing_. Unfortunately gcc does not
make this easy, but in theory the compiler just bounces off of libc.so to
find libc.so.6 or libc.so.0 or newlib or what, and adds some .o files when
linking an executable. It shouldn't care at build time what C library it's
using, and in fact I don't actually build and install uClibc until _after_
I've built binutils and gcc. (Yes gcc cares about things it shouldn't, but
you can whack it on the nose with a rolled up newspaper until it stops.)

This should all be simple and straightforward. Sometimes it isn't, but it can
be _fixed_.
Post by Thomas Charron
Additionally, you can use newlib and gcc to compile
C++ applications (however, no STL, etc support). I also have a small
side project to try to move the uclibc++ libraries to bare metal.
Sounds interesting. I'm sure Garrett would love to see your patches when
you're done.

Rob
--
GPLv3 is to GPLv2 what Attack of the Clones is to The Empire Strikes Back.

Thomas Charron
2009-04-12 13:38:30 UTC
  I'm talking about bare metal.  Typically, these systems have no more
RAM than is present on the processor.  Like, 64k.  There is no
bootloader, no busybox, and most specifically, no OS.
Yeah, I've encountered those, and written code for them.  Often in assembly,
since with those constraints you need every byte, and thus you haven't got
the luxury of coding in C, so I'm not sure how it's relevant here.  (Last I
checked, gcc only supported 32 bit and higher targets as a policy decision,
which rules out the z80 and such.)
Saying '32 bit' doesn't really cover the full range. You can have a
32 bit instruction pointer, but only a 16 bit data pointer, or vice
versa. However, your inference here is clear, and is wrong. There
are many 32 bit processors you cannot run Linux on, nor would you want
to.
I'm impressed by the way the OpenBios guys use C code directly from ROM before
the DRAM controller is set up.  They put the CPU cache into direct mapped
writeback mode, zero the first few cache lines, set the stack pointer to the
start of the address range they just dirtied, and then jump to C code and
make sure to never touch ANY other memory until the DRAM controller is up and
stabilized.  I.E. they're using the cache as their stack, so they can
initialize the dram controller from C code instead of having to do it in
assembly.
Neat trick, I thought.  But they don't consider it cross compiling, any more
than the linux bootup code to set up page tables and jump from 16 bits to 32
bits was cross compiling...
Welcome to the world of the embedded. That sort of initialization
is commonplace. But you're only seeing the tip of the iceberg there.
Most times on embedded boards I'm working on, there IS no additional
RAM. However, what I need to use that 60MHz processor for is
complicated, and coding it in assembly would be complex.

As an example, one specific project I've been involved with will
control up to 28 stepper motors microstepping at 256 us/s, while
performing operations against several other peripherals connected via
SPI.

You're typically running from flash directly.
  Yes.  Since GCC is generally tested on 'real' systems, some versions
perform differently than others depending on the target processor
itself.
Some variants of gcc are broken, yes.
In some cases, a version of GCC simply won't
work at all for a given processor.
Yes, that's why you need a version of GCC that's capable of outputting code
for the processor.  (So how do _you_ configure stock gcc+binutils source to
output code for z80 or 8086 targets?  Yes, I'm aware of
http://www.delorie.com/djgpp/16bit/gcc/ and I'm also aware it's based on gcc
2.7 and hasn't been updated in 11 years.)
Again, you're making a huge assumption that simply isn't valid. If
you'd like to know how I can use gcc for, say, AVR32, Motorola 68k,
MIPS, ARM7/9/Cortex, then we can talk. Here are some examples of some
boards (most are simply links to Olimex boards, cheap and easy to find
links. :-D ):

http://www.olimex.com/dev/avr-gsm.html
http://www.olimex.com/dev/tms320-p28016.html
http://www.olimex.com/dev/lpc-p2129.html
http://www.olimex.com/dev/stm32-h103.html
http://www.olimex.com/dev/stm32-103stk.html
http://www.coridiumcorp.com/ARMduino.php?gclid=CPPX-aO165kCFSAhDQodMlKhRg
Post by Rob Landley
Could be, but I still don't understand why.  Care to explain?
  See above.  Bare metal *ISN'T* running Linux on a small box.  It's
another beast entirely.
Yes, I know.  You can't natively compile on a target that can't run a
compiler.  I agree.  How does that mean you need more than one compiler
targeting the same hardware?
Yes. Same target does NOT mean the same hardware.
I think you're confusing two different points I've made.  The first is "You
should be able to have a usable somewhat generic cross compiler for a given
target architecture" with "When your target is Linux you should be able to
build natively under emulation, and thus avoid cross compiling."  Those are
two completely different arguments.
(And the second argument never claimed that the emulator and the target
hardware you actually deployed would have exactly the same hardware, any more
than Ubuntu's build servers and my laptop have exactly the same hardware.  If
I build a boot floppy image from my laptop, am I cross compiling?  But it's
still a different argument.)
Again, you're misunderstanding. I'm saying you've made points which
simply aren't valid to me, an actual user of crosstool-NG. They
simply aren't relevant. I may need two distinct compilers for the
exact same processor target.
  Specifically, I've been working with a mashed in version of newlib,
and newlib-lpc.  In those cases, you actually don't use the GNU C
library at all.
Yes, that would be the swapping out the built-in libraries and headers part,
above.  This can be done after the compiler is built with a fairly simple
wrapper.  (If gcc wasn't constructed entirely out of unfounded assumptions,
you wouldn't even need to wrap it.)
Then you should REALLY call up the GCC guys and make sure they know
that --with-newlib was a reeeaaaally bad idea so they can fix their
obvious error in thinking, lickety-split.
--
-- Thomas

Rob Landley
2009-04-07 10:02:05 UTC
Post by Yann E. MORIN
Rob,
All,
OK, I think I'm catching up with the backlog ;-)
Post by Rob Landley
Post by Yann E. MORIN
2.b) Ease configuration of the toolchain
As it stood, configuring crosstool required editing a file containing
shell variable assignments. There was no proper documentation of what
variables were used, and no clear explanation of each variable's
meaning.
My response to this problem was to write documentation.
Sure. But documentation is not all. Fixing and enhancing both the code
and the configuration scheme is also a way to achieve a better end-user
experience (Ahaha! I've been hanging around the marketing dept too much lately).
Post by Rob Landley
While I've used kconfig myself, there's an old saying: "If all you have
is a hammer, everything looks like a nail".
Hehe! Show me a better and simpler way.
My attempt to do so was why I brought my build system up. Not going there
now.
Post by Yann E. MORIN
Post by Rob Landley
The failure mode of kconfig is having so much granularity
So what? Should I restrict what the end-user is allowed to do, based solely
on my own experience?
First of all, I never assume I'm going to think of everything the end user is
going to want to do. I'm not that smart, or that crazy.

Secondly, I point out that when it comes down to it, crosstool-ng is more or
less a series of shell scripts.

I chose a simple shell script implementation so that if somebody really wanted
to tweak what it did, they could just go in and edit said shell script. "If
all else fails, it's just a shell script." (In fact, reading through my
shell scripts probably takes considerably less time than reading through my
documentation.) Obviously that's not a first resort, but it gave me a
threshold for "at what point is it collectively less work for the 0.1% of
people who will ever care about this to go in and edit the dumb script than
it is for the other 99.9% to figure out what this option _means_ and that
they don't need to do it."

Making crosstool-ng's scripts simple, easy to read, well-documented, and
generally easy to edit means that menuconfig isn't the ONLY way people can
get the behavior they want out of them, either. (Whether you intended that
or not. :) You are giving your users the complete source code to your build
system.

Also, on "-pipe" specifically, why not just give them a $CFLAGS that's fed in
to every build, which they can set in menuconfig? What I mean is, why limit
it to -pipe, or treat that option specially?
Post by Yann E. MORIN
If the possibility exists in the tools, then why
prevent the user from using (or not using) that? Configuring a compiler
has soooo many options, it drives you mad.
As the lawyers would say, "asked and answered".
Post by Yann E. MORIN
Quite a lot can be inferred
from higher-level options, such as the arch name and the CPU variant, or
the ABI...
Assuming you've got a year's experience with cross compiling, sure.

Ok, standard usability exercise, ala Alan Cooper's "The Inmates are Running
the Asylum" or Don Norman's "The Design of Everyday Things".

Imagine you need to build a toolchain for an end user, but you'll have to
configure it for them. You've sat them down in a chair and get to ask them
questions, in person, and you're going to write down the answers on a piece
of paper.

The first questions that come to mind are probably things like "what target
processor is it for?", "linux or raw elf?", "glibc or uClibc?", right?

Look at your menuconfig. What's the first questions it asks in the first
menu? "Would you like to use obsolete or experimental features? What
directory should I save the tarballs I download in?"

A reasonable amount of newbie anguish might be spared just by moving that menu
down to just above "debug facilities". The next three menus are a good
start, although why "GMP and MPFR" is at the top level, I have no idea.
(These are gcc 4.3 implementation details, aren't they? Presumably they have
help entries now, but shouldn't they be under the gcc menu or something?)

It might make sense to have a "packages" menu, containing binutils, C
compiler, C library, and tools menus. Although the tools menu with libelf
and sstrip in it is kind of inexplicable; when do you need that and why?

Also, a newbie should probably start with a preconfigured sample toolchain
their first time through, but the README never mentions those, and when you
run ct-ng with no arguments in an 80x25 screen the output scrolls off; if you
pipe it to less and start reading you see "menuconfig" and start investigating
that without reading through the rest. Once you're in menuconfig, there's no
hint of the samples either.

I read the README all the way through and looked at the
initial ./configure --help fairly closely, but by the time I got to ct-ng and
saw it did menuconfig I skipped straight to reading the menuconfig help and
thus setting up a toolchain build the slow way. (I did at one point
find . -name "*_defconfig" and it hit all the stuff in
targets/src/linux-*...)

Oh, and under "target optimizations", a little judicious renaming might be
nice.

"Generate code for which ABI"
"Emit assembly for which CPU"

(When I saw "Emit assembly for CPU" I went "um, yes please?" and had to read
the help to have a clue what it was on about.)

It might help to indent "Tune for which CPU" under "Emit assembly for which
CPU", since it is a sub-option.
Post by Yann E. MORIN
Post by Rob Landley
Ironically, kconfig is only really worth using when you have enough
config options to bother with it.
But there *are* a lot of options! And that's not even all that could
be available!
I agree there are a lot of options and could be more.
Post by Yann E. MORIN
Post by Rob Landley
When you have small numbers of config options
that are usually going to be off, I prefer environment variables (with a
config file in which you can set those in a persistent manner) or command
line options. Since you can set an environment variable on the command
FORK=1 ./buildall.sh
I lean towards those. Possibly a matter of personal taste...
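The pattern described above (a one-off environment override layered over a persistent config file) is only a few lines of shell. The `build.conf` and `FORK` names are illustrative, not files or variables from either project:

```shell
# Illustrative env-var-with-config-file pattern: a sourced config file
# supplies persistent defaults, but anything set in the calling
# environment wins, so "FORK=1 ./buildall.sh" works as a one-off.
FORK_FROM_ENV="$FORK"                  # remember what the caller set
[ -r ./build.conf ] && . ./build.conf  # may persistently set FORK=...
FORK="${FORK_FROM_ENV:-${FORK:-0}}"    # env > config file > default
echo "FORK=$FORK"
```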
I think so. To configure stuff, I prefer having a GUI that is not vi, but
that is still simple, and the kconfig language and its mconf interpreter
seem well suited, even if they are not the best. But there is no alternative
that I'm aware of. Plus, it is quite well known thanks to the Linux kernel
using it.
Sure. If you're going to use something like kconfig, kconfig is a good
choice. :)
Post by Yann E. MORIN
Post by Rob Landley
Similarly, the "experimental" one seems useless because when you enable
it the experimental versions already say "EXPERIMENTAL" in their
descriptions (wandered around until I found the binutils version choice
menu and looked at it to be sure). They're marked anyway, so why is an
option to hide them an improvement?
EXPERIMENTAL in the prompt is just a string. A config knob makes the user
really aware that he/she's trying something that might break.
They're always trying something that might break. I've broken the "cat"
and "echo" commands before. (And my friend Piggy broke "true" in a way that
took down his system! I bow before him, I can't top that one.)
Post by Yann E. MORIN
Plus, it marks the resulting config file as containing EXPERIMENTAL
features/versions/... and is easier to process.
Sounds like it's there to discourage them from doing it, which I suppose makes
sense.
Post by Yann E. MORIN
Post by Rob Landley
As for the third, wasn't there a debug menu? Why is "Debug crosstool-NG"
in the paths menu? (Rummage, rummage... Ah, I see, the debug menu is a
list of packages you might want to build and add to the toolchain. Ok,
sort of makes sense. Still, the third thing a newbie sees going through
in order is a "very very expert" option. Moving on...)
That's why the former is titled "Debug crosstool-NG", while the latter
is titled "debug facilities". Again, maybe the wording is wrong.
I'm trying to figure out a coherent organization.

Under "operating system" you have "Check installed headers", which is another
debug option. Similarly the kernel menu has kernel verbosity but the paths
and misc options menu has general verbosity, and not only are they nowhere
near each other but I dunno what the difference is. (There isn't a "uClibc
verbosity", and it has a similar V=1 logic, if that's what the other option
means; that's just a guess, since no help is available for this kernel option...)
Post by Yann E. MORIN
Post by Rob Landley
() Local tarballs directory (NEW)
(${CT_TOP_DIR}/targets) Working directory (NEW)
(${HOME}/x-tools/${CT_TARGET}) Prefix directory (NEW)
Most users aren't going to care where the local tarballs directory is, or
the working directory.
Most. Not all. And the help entries are here to tell the user whether
it is wise to change them.
Post by Rob Landley
The "prefix directory" is presumably different from where
we just installed with --prefix.
The help roughly says: the path where the toolchain is expected to run from.
Unfortunately, there is yet no support for DESTDIR, the place where the
toolchain will be installed, to allow installing out-of-tree. For the time
being, the DESTDIR is plainly /; that is, the toolchain is expected to run
on the system it is built on. But that should eventually be fixed.
I used a wrapper. Not going there.
Post by Yann E. MORIN
Post by Rob Landley
I suppose it's nice that you can override
the defaults, but having it be one of the first questions a user's posed
with when going through the options in order trying to configure the
thing isn't really very helpful. It's not my problem, just _work_.
Where should I install the toolchain? In the user's home directory?
This is indeed the default, but you are complaining about it!
/me is totally spoiled by automatically relocatable toolchains, pleads the
5th.
Post by Yann E. MORIN
If not in ${HOME}, where should I install the toolchain? In /opt ?
In /usr/local ? Bah, most users don't have write access there.
Except root. But building as root is asking for problems!
I forgot how big an issue this is for unwrapped gcc's.
Post by Yann E. MORIN
Post by Rob Landley
(I also don't know
what CT_TOP_DIR and CT_TARGET are, I'd have to go look them up.)
docs/overview.txt is advertised in the top-level README.
I read through the introduction and history parts of that file fairly
diligently, which consumed most of the attention span I had devoted to it.
Then I got to the point where it told me to install and I went and did that,
spending half an hour or so fighting with ./configure and installing the
various prerequisite packages. Then I came back to the overview and skimmed
a bit more until it told me about menuconfig, at which point I went off and
read menuconfig help for a long time, started reading the project's source
code while I was doing that to see what some of the config symbols actually
affected, and never really came back to the overview file. (Skimmed over it
a couple times, but didn't read it closely past the first couple hundred
lines.)

(I didn't say I didn't believe I _could_ look up those symbols, just that I
didn't know what they did off the top of my head.)

Part of what I'm reacting to here is the experience the project presents to
new users. Once you've been using something for six months you can get used
to almost anything, but learning it the first time can be really frustrating.
Post by Yann E. MORIN
Post by Rob Landley
For comparison, my system creates a tarball from the resulting cross
compiler, and leaves an extracted copy as "build/cross-compiler-$ARCH".
You can put them wherever you like, it's not my problem. They're fully
relocatable.
Toolchains built with crosstool-NG are also fully relocatable.
Having the user say beforehand where to install the stuff is also
another good option.
So why were you talking about where to install it being a big deal earlier?

I'm confused.
Post by Yann E. MORIN
Post by Rob Landley
[*] Remove documentation (NEW)
Nice, and possibly the first question someone who _isn't_ a cross
compiler toolchain developer (but just wants to build and use the thing)
might actually be interested in.
:-)
Post by Rob Landley
Your ./configure still requires you to install makeinfo no matter what
this is set to. You have to install the package so this can delete its
output?
Unfortunately, gcc/glibc/... build and install their documentation by
default. I haven't seen any ./configure option that would prevent them
from doing so... :-(
You already mentioned the breakage in some version of some package that didn't
have makeinfo.
Post by Yann E. MORIN
Post by Rob Landley
Wouldn't it be better to group this with a "strip the resulting binaries"
option, and any other space saving switches? (I'm just assuming you
_have_ them, somewhere...)
Nope. But that's a good idea. :-)
Post by Rob Landley
[*] Render the toolchain read-only (NEW)
This is something the end user can do fairly easily for themselves, and
I'm not quite sure what the advantage of doing it is supposed to be
anyway. In any case it's an install option, and should probably go with
other install options, but I personally wouldn't have bothered having
this option at all.
In fact, it's here and ON by default, and why this is so is clearly
explained both in docs/overview.txt and in the help of this option. Well,
that might not be so obvious, after all. :-(
The complete help entry in 1.3.2:

│ CT_INSTALL_DIR_RO:

│ Render the directory of the toolchain (and its sub-directories)
│ read-only.

│ Usefull for toolchains destined for production.

Presumably the "so clearly explained" went into 1.3.3 or svn. (If you install
it as root in /usr or some such, normal users shouldn't be able to write to
it because they don't own it. That would be "production", no?)

Is some portion of the toolchain by default world writeable?

The overview file suggests the user will accidentally install stuff into it
otherwise, apparently because users are accident prone like that.

I've known people to intentionally install stuff like zlib headers and libraries into
their cross compiler because they built and installed zlib into their target
root filesystem, and they then wanted to cross compile more stuff that used
zlib, and needed to link against its headers and libraries at compile time.
But they did this quite intentionally...
Post by Yann E. MORIN
Post by Rob Landley
(10) connection timeout (NEW)
This is an implementation detail. Users should hardly ever care.
No. I have a case where the network is sooo slow that connections are
established well after the 10s default timeout (17s if I remember correctly).
That's why I used a default of 20, with one retry. Still tolerable for
waiting humans, but enough of a gap that systems that don't respond in time
usually are down.
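For concreteness, here is roughly what such a timeout-plus-retry knob ends up feeding into a downloader. This is a sketch with made-up variable names; `-T` (timeout) and `-t` (tries) are wget's actual options:

```shell
# Sketch of a download helper with a configurable connection timeout
# and one retry.  Variable names are invented for the example.
CONNECT_TIMEOUT="${CONNECT_TIMEOUT:-20}"
TRIES="${TRIES:-2}"   # 2 tries = one retry after the first attempt

fetch() {
    # download $1 into the current directory, named after the URL's tail
    wget -T "$CONNECT_TIMEOUT" -t "$TRIES" -q -O "${1##*/}" "$1"
}
```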
Post by Yann E. MORIN
Post by Rob Landley
As a higher level design issue, It would have been easier for me to
implement my build system in python than in bash, but the point of doing
it in bash is it's the exact same set of commands you'd run on the
command line, in the order you'd run them, to do this yourself by hand.
So to an extent the shell scripts act as documentation and a tutorial on
how to build cross compilers. (And I added a lot of #comments to help out
there, because I _expect_ people to read the scripts if they care about
much more than just grabbing prebuilt binary tarballs and using them to
cross compile stuff.)
That paragraph also applies to crosstool-NG. Word for word, except for
the python stuff, which I don't grok.
So my earlier point about not needing to expose sufficiently obscure config
entries via menuconfig when they can just tweak the shell scripts should
apply to crosstool-ng as well?
Post by Yann E. MORIN
Post by Rob Landley
[ ] Stop after downloading tarballs (NEW)
This seems like it should be a command line option.
Granted, same answer as for "Force downloads"
Post by Rob Landley
[ ] Force extractions (NEW)
Ah, you cache the results of tarball extraction too. I hadn't noticed.
(I hadn't bothered to mention that mine's doing it because it's just an
implementation detail.)
This is one of the things my setupfor function does: it extracts source
into build/sources, in a subdirectory with the same name as the package.
Unfortunately, not all packages are well behaved: the name of the
tarball and the name of the directory it extracts to don't always match.
Yup, I noticed that. Hence the "extract into empty directory and mv *" trick.
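The trick can be sketched as follows: extract into a scratch directory, then give whatever single top-level entry came out the canonical name (the function name here is made up for illustration):

```shell
#!/bin/sh
# Sketch of the "extract into an empty directory and mv" trick:
# normalizes the top-level directory name no matter what the tarball
# actually calls it.
extract_as() {
    tarball=$1    # path to a .tar.gz
    name=$2       # canonical directory name we want
    rm -rf "scratch.$$" "$name"
    mkdir "scratch.$$"
    tar -xzf "$tarball" -C "scratch.$$"
    # whatever single top-level entry came out, rename it canonically
    mv "scratch.$$"/* "$name"
    rmdir "scratch.$$"
}
```

This sidesteps the mismatched-name problem entirely, since the build scripts only ever refer to the canonical name.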
Post by Yann E. MORIN
Post by Rob Landley
Again, I detect "good/stale" cached data via sha1sums.
I'm missing this. It's on my TODO as well, but low priority...
It bit me personally often enough that I did something about it.

I did already write generic code to do this, you know. Available under GPLv2
if any of it's of interest to you. (Yes, I did read the whole of LICENSES.)
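For the curious, the sha1-stamp idea can be sketched like this (the function names and stamp-file layout are assumptions, not Rob's actual code):

```shell
#!/bin/sh
# Sketch: detect stale extracted source by recording the tarball's sha1
# next to the extracted tree, and re-extracting only when it no longer
# matches. Names (.sha1-stamp, is_fresh) are illustrative.
is_fresh() {
    tarball=$1
    dir=$2
    [ -d "$dir" ] || return 1
    [ -f "$dir/.sha1-stamp" ] || return 1
    want=$(sha1sum "$tarball" | cut -d' ' -f1)
    have=$(cat "$dir/.sha1-stamp")
    [ "$want" = "$have" ]
}

mark_fresh() {
    sha1sum "$1" | cut -d' ' -f1 > "$2/.sha1-stamp"
}
```

A build step would then do: `is_fresh "$tarball" "$dir" || { extract ...; mark_fresh "$tarball" "$dir"; }`.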
Post by Yann E. MORIN
Post by Rob Landley
(1) Number of parallel jobs (NEW)
My sources/includes.sh autodetects the number of processors and sets
CPUS. You can override it by setting CPUS on the command line. (I often
do "CPUS=1 ./build.sh x86_64" when something breaks so I get more
understandable error messages.)
In general, I try never to ask the user for information I can autodetect
sane defaults for, I just let them override the defaults if they want to.
At work, we have a build farm whose purpose is to build the firmwares for
our targets. The machines are quad-CPU. Deploying a new toolchain doesn't
need to happen in the snap of a finger, and building the firmwares has
priority. So I use that to restrict the number of jobs run in parallel.
I usually run the builds as "nice 20" when they're not a priority, but the
ability to easily override the autodetected CPU value is also there.
Autodetecting a default doesn't mean you can't specify...
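The autodetect-with-override pattern being described is tiny in shell; a sketch using getconf (the actual detection code in either project may differ):

```shell
#!/bin/sh
# Sketch: autodetect a sane default for the number of parallel jobs,
# while letting the user override it from the environment, e.g.:
#   CPUS=1 ./build.sh x86_64
if [ -z "$CPUS" ]; then
    CPUS=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 1)
    [ -n "$CPUS" ] || CPUS=1   # belt and braces if getconf printed nothing
fi
echo "building with $CPUS parallel job(s)"
```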
Post by Yann E. MORIN
Post by Rob Landley
(0) Maximum allowed load (NEW)
Ooh, that's nice, and something mine doesn't have.
Yeah! One good point! :-)
Post by Rob Landley
Personally I've never had
a clear enough idea of what loadavg's units were to figure out how it
equated to slowing down my desktop, and I've actually found that my
laptop's interactivity going down the drain is almost never due to
loadavg, it's due to running out of memory and the thing going swap-happy
with the disk pegged as constantly active. (The CPU scheduler is way
the heck better than the I/O scheduler, and virtual memory is
conceptually horrible and quite possibly _never_ going to be properly
fixed at the theoretical level. You have to accurately predict the
future in order to do it right, that's slightly _worse_ than solving the
halting problem...)
(0) Nice level (NEW)
I already have the number of parallel jobs, and loadavg. Why not have nice as well?...
See, when that kind of question comes up, I go the other way. :)

(I've had many evenings devoted to "This is too complicated, how can it be
simplified? Can I automate something away, or avoid needing it entirely, or
group things and have something generic pop out, or...?" You've seen the
kind of projects this attracts me to...)
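All three knobs in this thread (job count, load cap, niceness) map directly onto standard flags of nice(1) and make(1); a sketch with illustrative variable names:

```shell
#!/bin/sh
# Sketch: the three knobs under discussion expressed as standard flags.
# JOBS / MAXLOAD / NICENESS are illustrative names.
JOBS=${JOBS:-4}        # make -j: number of parallel jobs
MAXLOAD=${MAXLOAD:-8}  # make -l: don't start new jobs above this loadavg
NICENESS=${NICENESS:-10}

build_cmd() {
    # print rather than run, so the policy is visible (and testable)
    echo "nice -n ${NICENESS} make -j${JOBS} -l${MAXLOAD}"
}
```

Note that `make -l` only gates the *start* of new jobs on the 1-minute load average; it does not throttle jobs already running.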
Post by Yann E. MORIN
Post by Rob Landley
[*] Use -pipe (NEW)
Why would you ever bother the user with this? It's a gcc implementation
detail, and these days with a modern 2.6 kernel dentry and page caches
you probably can't even tell the difference in benchmarks because the
data never actually hits the disk anyway.
Have you actually benchmarked the difference?
10% gain on my machine (Dual AMD64, 1GiB RAM) at the time of testing.
Much less now that I have a quad-core with 4GiB RAM.
Cool. (And possibly a good thing to put in the help text.)

So what's the downside of just having it on all the time?
Post by Yann E. MORIN
Post by Rob Landley
[ ] Use 'ash' as CONFIG_SHELL (NEW)
A) I haven't got /bin/ash installed. Presumably you need to install it
since the help says it's calling it from an absolute path?
crosstool-NG does not build and install it. If the user wants that,
he/she's responsible for installing it, yes. Maybe I should build my own...
Post by Rob Landley
B) If your scripts are so slow that you need a faster shell to run them,
possibly the problem is with the scripts rather than with the shell?
My scripts are not so slow. ./configure scripts and Makefiles are.
Again, using dash, the build went 10%/15% faster on my quad-core.
Configure is enormously slow, yes. Is there a downside to autodetecting
that /bin/ash or /bin/dash is installed and using it if it is?

Also, why is this a yes/no instead of letting them enter a path to
CONFIG_SHELL (so they can use /bin/dash or /bin/ash if they want)?
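Rob's suggestion could look something like this sketch: probe for a fast POSIX shell, but let an explicit CONFIG_SHELL win (the probing order is an assumption):

```shell
#!/bin/sh
# Sketch: pick a fast POSIX shell for ./configure if one is installed,
# falling back to bash; an explicit CONFIG_SHELL always wins.
pick_config_shell() {
    if [ -n "$CONFIG_SHELL" ]; then
        echo "$CONFIG_SHELL"
    elif command -v dash >/dev/null 2>&1; then
        command -v dash
    elif command -v ash >/dev/null 2>&1; then
        command -v ash
    else
        echo /bin/bash
    fi
}
# usage: CONFIG_SHELL=$(pick_config_shell) ./configure ...
```

This also answers the yes/no-versus-path question: accepting a path subsumes the boolean.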
Post by Yann E. MORIN
Post by Rob Landley
I admit that one of the potential weaknesses of my current system is that
it calls #!/bin/bash instead of #!/bin/sh. I agonized over that one a
bit. But I stayed with bash because A) dash is seriously broken, B) bash
has been the default shell of Linux since before the 0.0.1 release.
I do explicitly call bash as well, because I use bashisms in my scripts.
./configure and Makefiles should be POSIX compliant. I am not.
Post by Rob Landley
Maximum log level to see: (INFO) --->
I don't have a decent idea of what you get with each of these. (Yes,
I've read the help.)
OK, that may require a little more explanation in the help.
I don't care about the components build log. I just want to know whether
the build was successful, or failed. In crosstool-NG the messages are
ERROR : crosstool-NG detected an error: failed download/extract/patch/...
incorrect settings, internal error... ERRORs are fatal
WARNING : non-fatal condition that crosstool-NG knows how to work around,
but if you better give it correct input, rather than letting it
guess.
Typo, "but it's better if you give it". (If you're going to use that as new
help text...)
Post by Yann E. MORIN
INFO : informs the user of the overall process going on. Very terse.
Tells the current high-level step being done: downloading,
extracting, building a component...
EXTRA : informs the user with a finer level of what's going on. For
each step listed above, prints sub-sequences: package being
./configure-ing, make-ing, installing...
DEBUG : messages aimed at debugging crosstool-NG's behavior.
ALL : print everything: ./configure output, make output...
Is this a simple increasing scale of detail like the kernel's log levels, or
does "DEBUG" output things that "ALL" doesn't?
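As described, the levels read like a strictly increasing scale; such a scale is commonly implemented as a numeric threshold, sketched here with illustrative names (not crosstool-NG's actual logging code):

```shell
#!/bin/sh
# Sketch of a strictly increasing log-level scale (like the kernel's):
# a message is shown iff its level is at or below the configured maximum,
# so ALL is a strict superset of DEBUG, which is a superset of EXTRA, etc.
LOG_LEVEL=${LOG_LEVEL:-3}   # 1=ERROR 2=WARNING 3=INFO 4=EXTRA 5=DEBUG 6=ALL

log() {
    level=$1; shift
    case "$level" in
        ERROR) n=1;; WARNING) n=2;; INFO) n=3;;
        EXTRA) n=4;; DEBUG) n=5;; *) n=6;;
    esac
    [ "$n" -le "$LOG_LEVEL" ] && printf '[%s] %s\n' "$level" "$*"
    return 0
}
```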
Post by Yann E. MORIN
Post by Rob Landley
[*] Progress bar (NEW)
I have the "dotprogress" function I use for extracting tarballs; it prints a
period every 25 lines of input.
Mine rotates the bar every ten lines. Which is better? ;-)
Possibly yours, I just didn't care that much. :)
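Both variants boil down to "emit one mark every N lines of input"; a dotprogress-style filter might look like this (an assumed shape, not Rob's exact function):

```shell
#!/bin/sh
# Sketch of a dotprogress-style filter: reads lines on stdin and prints
# one '.' per 25 lines. Pipe a verbose command into it, e.g.:
#   tar -xvzf big.tar.gz | dotprogress
dotprogress() {
    count=0
    while IFS= read -r _line; do
        count=$(( count + 1 ))
        [ $(( count % 25 )) -eq 0 ] && printf '.'
    done
    printf '\n'
}
```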
Post by Yann E. MORIN
Post by Rob Landley
I used to change the color of the output so you could see at a glance
what stage it was, but people complained
Right. crosstool-NG's output used to be colored, but that just plainly
sucks. And, believe it or not, some people are still using
non-color-capable terminals...
Somebody actually asked me how to get it to stop changing the title bar (even
though bash changes the title bar right back when your prompt comes up). I
showed them what line to comment out. This would be an example of me not
adding a config entry even though one real person did want to configure
it. :)

I'd like to clarify: these granularity decisions are HARD. Yes I'm
questioning all your menuconfig entries, because I question everything.
Several of the things you're doing you've already come up with an excellent
reason for, it just wasn't obvious from the menuconfig help or a really quick
scan of the source code.
Post by Yann E. MORIN
Post by Rob Landley
[*] Log to a file (NEW)
Again, "./build.sh 2>&1 | tee out.txt". Pretty much programming 101
these days, if you haven't learned that for building all the other source
packages out there, cross compiling probably isn't something you're ready
for.
Don't believe that. I have seen many a newbie asked to build embedded stuff
on an exotic board that barely had Linux support, said newbie just
getting out of school, totally amazed at the fact that you could actually
run something other than Windows on a PC, let alone that something other
than PCs even existed...
(Well, that newbie has grown since, and is now quite a capable Linux guru.)
And I'm happy to explain to them about shell redirection if they ask. :)

There are many, many people who have asked me questions about system building
and got pointed at Linux From Scratch or
http://tldp.org/HOWTO/Bootdisk-HOWTO/ (which is alas getting a bit long in
the tooth now but still a pragmatic introduction to the basics) or
http://tldp.org/HOWTO/From-PowerUp-To-Bash-Prompt-HOWTO.html

In this case,
http://www.gnu.org/software/bash/manual/bashref.html
and
http://www.opengroup.org/onlinepubs/9699919799/idx/shell.html

Might be good references to foist 'em off on... :)

Yet another tangent, of course.
Post by Yann E. MORIN
Post by Rob Landley
And I'm at the end of this menu, so I'll pause here for now. (And you
were apologizing for writing a long message... :)
I knew you would surpass me in this respect! ;-P
( Woohoo! I caught up with your mails! :-) )
Not anymore!

Rob
--
GPLv3 is to GPLv2 what Attack of the Clones is to The Empire Strikes Back.

--
For unsubscribe information see http://sourceware.org/lists.html#faq
Nye Liu
2009-04-07 18:18:26 UTC
Permalink
A few things, from the "end" user perspective.

I have *no interest* in a tool that builds a root filesystem for me. I
am perfectly capable of making my own system, from init on up to ..
whatever I need for the target. It may not even require init.

Nor do I need a tool that builds a native compiler: my target may be too
small for that.

What I do want is a tool that pulls the latest compiler/library/binutils
and (deterministically) makes me a set of (relocatable) cross-compiler
toolchains for the targets I want.

I pick a version of crosstool-ng, check it into a vendor branch, make
whatever changes I need to build the set of toolchains I want, check
those into my own local repository, and presto, I have a system (under
version control) that I can use to build toolchains and do regression
tests on if I choose to change library/binutils/compiler versions.

The other point is about dependencies: I never had a single problem
satisfying crosstool's requirements. I apt-get what I need, note what
the packages are, and I am done.

In this case: automake, gawk, libtool (yes, it sucks), texinfo, zip, and
fastjar for the target I am concerned with.

The final point is about "detecting" build... have you ever compared the
output of:

1) arch
2) uname -m
3) getconf LONG_BIT

particularly on various x86_64/i686 userland/kernel combinations? Not to
mention whether "arch" exists at all...
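The disagreement is easy to observe; the sketch below runs the probes side by side. On an i686 userland over an x86_64 kernel, uname -m reports the kernel's view while getconf LONG_BIT reports the userland's word size:

```shell
#!/bin/sh
# The "what am I?" probes Nye mentions, run side by side. On a 32-bit
# userland over an x86_64 kernel they disagree: uname -m says x86_64,
# getconf LONG_BIT says 32, and arch(1) may not exist at all.
echo "uname -m:         $(uname -m)"
echo "getconf LONG_BIT: $(getconf LONG_BIT)"
if command -v arch >/dev/null 2>&1; then
    echo "arch:             $(arch)"
else
    echo "arch:             (not installed)"
fi
```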



Rob Landley
2009-04-08 00:21:57 UTC
Permalink
Post by Nye Liu
The final point is about "detecting" build... have you ever compared the
1) arch
2) uname -m
3) getconf LONG_BIT
particularly on various x86_64/i686 userland/kernel combinations? not to
mention whether "arch" exists at all....
Yes, I have. I tend to use the one that the Single Unix Specification
requires to be there:

http://www.opengroup.org/onlinepubs/9699919799/utilities/uname.html

In fact, as a workaround for a specific common source of build bugs I once
wrote my own uname.c implementation that detects when you're running a 32 bit
binary on an x86-64 host and lies about it:

http://landley.net/hg/toybox/file/287bca550748/toys/uname.c
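The policy behind such a wrapper can be reduced to a pure function (this is an illustration of the idea, not the logic of the linked uname.c):

```shell
#!/bin/sh
# Sketch of the idea behind a lying uname: if the running userland is
# 32-bit, report a 32-bit machine name even on an x86-64 kernel. The
# inputs are passed in explicitly so the policy itself is testable;
# a real wrapper would feed it `uname -m` and `getconf LONG_BIT`.
effective_machine() {
    machine=$1   # what uname -m said
    long_bit=$2  # what getconf LONG_BIT said
    if [ "$machine" = "x86_64" ] && [ "$long_bit" = "32" ]; then
        echo i686
    else
        echo "$machine"
    fi
}
```

With this, ./configure scripts that key off `uname -m` stop mis-detecting a 32-bit build environment as 64-bit.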

But again, "tangent". :)

(Also, see my earlier rant about programs that care too much about the host.
All this _can_ be made easy. The fact that complicated solutions exist
doesn't mean that easy solutions aren't possible.)

Rob
--
GPLv3 is to GPLv2 what Attack of the Clones is to The Empire Strikes Back.

Thomas Charron
2009-04-06 18:11:52 UTC
Permalink
On Sat, Apr 4, 2009 at 2:14 PM, Yann E. MORIN
Post by Yann E. MORIN
This post is to present the overall design of crosstool-NG, how and why
I came up with it, and to eventually serve as a base for an open discussion
on the matter.
After some quick thought, and after mucking around to get things to
work with newlib, there is one real issue with the current
configuration that makes it difficult to add new features.

On one hand, using these separate build scripts makes it fairly easy
to add a simple option within the menu, and have the build script
utilize it. In the case of newlib, it really requires a script for
newlib itself (this is the easy part), but then modifications to the
gcc.sh script, making it no longer really 'stand alone'. For
simplicity, I was almost thinking of splitting off the gcc build
script and perhaps having it use a gcc-newlib.sh script.

Eventually, uClibc will also run on bare metal; I wonder if the same
issue would happen, where gcc would need to be built differently.
--
-- Thomas
