Aros/Developer/BuildSystem


Overview


AROS uses several custom development tools in its build-system to aid developers by providing an easy means of generating custom makefiles for AmigaOS-like components.

The most important ones are:

  • MetaMake: A make supervisor program. It keeps track of the targets available in the makefiles in the subdirectories of a given root directory. A more in-depth explanation is given below.
  • GenMF (generate makefile): A macro language for makefiles. It allows several make rules to be combined into one macro, which can simplify writing makefiles.
  • Several AROS-specific tools that will be explained where appropriate during the rest of this documentation.

MetaMake

Introduction

MetaMake is a special version of make which allows the build-system to recursively build "targets" in the various directories of a project, or even another project.

The name of the makefiles used is defined in the MetaMake config file; for AROS it is mmakefile, so we shall use this name to denote MetaMake makefiles from here on.

MetaMake searches the directory tree for mmakefiles and, for each one it finds, processes the metatargets.

You can also specify a program which converts "source" mmakefiles (aptly named mmakefile.src) into proper mmakefiles before MetaMake is invoked on the generated mmakefile.

MetaTargets

MetaMake uses normal makefile syntax but gives a special meaning to comment lines that start with #MM. These lines are used to define so-called metatargets.

There are three ways of defining a metatarget in an mmakefile:

Real MetaTargets
   #MM metatarget : metaprerequisites
       This defines a metatarget with its metaprerequisites:
       when a user asks to build this metatarget, the metaprerequisites
       will be built as metatargets first, and afterwards the given metatarget.
       This form also indicates that a make target with the same name is
       present in this makefile.
   #MM
   metatarget : prerequisites
       This form indicates that the make target on the next line is also a
       metatarget, but the prerequisites are not metaprerequisites.
       The line defining a metatarget can be spread over several lines if
       every line ends with the \ character and the next line starts with #MM.
Virtual MetaTargets
   #MM- metatarget : metaprerequisites
       This is the same definition as for Real MetaTargets, only now no
       "normal" make target with the same name as the metatarget is present
       in the makefile.
How MetaMake works

MetaMake is run with a metatarget to be built specified on the command line.

MetaMake will first build up a tree of all the mmakefiles present in a directory and all its subdirectories (typically starting from the AROS source base directory), autogenerating them where applicable. While doing this it processes the mmakefiles and builds a tree of all the defined metatargets and their dependencies.

Next it will build all the dependencies (metaprerequisites) needed for the specified metatarget – and finally the metatarget itself.

Metaprerequisites are metatargets in their own right and are processed in the same fashion, so that any dependencies they have are also fulfilled.

For each metatarget, a walk through all the directories is done, and in every mmakefile where a Real MetaTarget of that name is defined, make is called with the name of the target as a "make target".

Exported variables

When MetaMake calls normal make, it also defines two variables (a small usage sketch follows the list below):

  $(TOP) contains the path of the root directory.
  $(CURDIR) contains the path of the current directory relative to $(TOP).
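
For instance, a rule in an mmakefile can use these variables to refer to its own location in the source tree. A minimal sketch (the metatarget name example-where is hypothetical, not one of the standard AROS targets):

     #MM example-where
     example-where :
         @echo "This mmakefile lives in $(TOP)/$(CURDIR)"
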
Autogenerating mmakefiles

Another feature of MetaMake is the automatic generation of mmakefiles from source mmakefiles.

When the directory tree is scanned for mmakefiles, files with a .src suffix that are newer than any present mmakefile are processed using a specified script that regenerates the mmakefile from the source mmakefile. The script to call is defined in the configuration file.

Examples

The next few examples are taken from the AROS project.

Example 1: normal dependencies
     #MM contrib-regina-module : setup linklibs includes contrib-regina-includes

This example says that this makefile contains a contrib-regina-module target that has to be built, but that before building this metatarget the metatargets setup, linklibs, includes and contrib-regina-includes have to be built first; i.e. the includes, link libraries, etc. have to be present before this module can be built.

Example 2: metatarget consisting of submetatargets
     #MM- contrib-freetype : contrib-freetype-linklib \
     #MM      contrib-freetype-graph \
     #MM      contrib-freetype-fonts \
     #MM      contrib-freetype-demos

This says that the contrib-freetype metatarget consists of building the linklib, graph, fonts and demos parts of freetype. If some extra work needs to be done in the makefile where this metatarget is defined, the definition can start with '#MM ' instead, and a normal make target 'contrib-freetype' then has to be present in the makefile.

This example also shows the use of line continuation in a metatarget definition.

Example 3: Quick building of a target
     #MM workbench-utilities : includes linklibs setup-clock-catalogs
     #MM
     workbench-utilities-quick : workbench-utilities

When a user executes MetaMake with workbench-utilities as an argument, make will be called in all the directories whose mmakefiles contain the metaprerequisites. This can become quite annoying when debugging programs. When the second metatarget workbench-utilities-quick is defined as shown above, only the target in this directory will be built. Of course, the user then has to make sure that the metatargets on which workbench-utilities depends are up to date.

Usage and configuration files

Usage: mmake [options] [metatargets]

To build mmake, just compile mmake.c. It doesn't need any other files.

mmake looks for a config file mmake.config or .mmake.config in the current directory, for a file named in the environment variable $MMAKE_CONFIG, or for a file .mmake.config in the directory $HOME.

This file can contain the following things:

#
This must be the first character in a line and begins a comment.
Comments are completely ignored by mmake (as are empty lines).
text="[<name>]"
This begins a config section for the project name. You can build
targets for this project by saying name.target.
maketool <tool options...>
Specifies the name of the tool to build a target. The default is
make "TOP=$(TOP)" "CURDIR=$(CURDIR)".
top <dir>
Specifies the root directory for a project. You will later find
this config option in the variable $(TOP). The default is the
current directory.
defaultmakefilename <filename>
Specifies the basename for makefiles in your project. Basename means
that mmake will consider other files which have this stem and an
extension, too. See the items to generate makefiles for details.
The default is Makefile.
defaulttarget <target>
The name of the default target which mmake will try to make if you
call it with the name of the project alone. The default is all.
genmakefilescript <cmdline...>
mmake will check for files with the basename as specified in
defaultmakefilename with the extension .src. If such a file is found,
the following conditions are checked: Whether this file is newer than
the makefile, whether the makefile doesn't exist and whether the file
genmakefiledeps is newer than the makefile. If any of these is true,
mmake will call this script with the name of the source file as an extra
option and the stdout of this script will be redirected to
defaultmakefilename. If this is missing, mmake will not try to
regenerate makefiles.
genmakefiledeps <path>
This is the name of a file which is considered when mmake tries to
decide whether a makefile must be regenerated. Currently, only one
such file can be specified.
globalvarfile <path>
This is a file which contains more variables in the normal make(1)
syntax. mmake doesn't know about any special things like line
continuation, so be careful not to use such variables later (they
do no harm if they exist in the file; you should just not use them
anywhere in mmake).
add <path>
Adds a nonstandard makefile to the list of makefiles for this
project. mmake will apply the standard rules to it as if the
defaultmakefilename was like this filename.
ignoredir <path>
Will tell mmake to ignore directories with this name. Try ignoredir
CVS if you use CVS to manage your project's sources.
Any option which is not recognised will be added to the list of known variables (i.e. foo bar will create a variable $(foo) which expands to bar).

Example

Here is an example:

      # This is a comment
      # Options before the first [name] are defaults. Use them for global
      # defaults
      defaultoption value

      # Special options for the project name. You can build targets for this
      # project with "mmake name.target"
      [AROS]

      # The root dir of the project. This can be accessed as $(TOP) in every
      # makefile or when you have to specify a path in mmake. The default is
      # the current directory
      top /home/digulla/AROS

      # This is the default name for Makefiles. The default is "Makefile"
      defaultmakefilename makefile

      # If you just say "mmake AROS", then mmake will go for this target
      defaulttarget AROS

      # mmake allows makefiles to be generated with a script. The makefile
      # will be regenerated if it doesn't exist, if the source file is
      # newer or if the file specified with genmakefiledeps is newer.
      # The name of the source file is generated by concatenating
      # defaultmakefilename and ".src"
      genmakefilescript gawk -f $(TOP)/scripts/genmf.gawk --assign "TOP=$(TOP)"

      # If this file is newer than the makefile, the script
      # genmakefilescript will be executed.
      genmakefiledeps $(TOP)/scripts/genmf.gawk

      # mmake will read this file and every variable in this file will
      # be available everywhere where you can use a variable.
      globalvarfile $(TOP)/config/host.cfg

      # Some makefiles must have a different name than
      # defaultmakefilename. You can add them manually here.
      #add compiler/include/makefile
      #add makefile

A metatarget looks like this: project.target, for example AROS.setup. If nothing is specified, mmake will build the default target of the first project in the config file. If a project is specified but no target, mmake will build the default target of that project.

GenMF

Introduction

Genmf uses two files to generate an mmakefile: first the macro definition file, and second the source mmakefile where these macros can be used.

     * This syntax example assumes you have the AROS sources (either from SVN or downloaded
       from the website), that 'genmf.py' is found in your $PATH, and that $AROSDIR
       points to the location of the AROS source root (e.g. /home/projects/AROS or similar).
           [user@localhost]# genmf.py $AROSDIR/config/make.tmpl mmakefile.src mmakefile
       This creates a mmakefile from the mmakefile.src in the current directory.

In general the % character is used as the special character for genmf source makefiles.

After ./configure, running make halts with an error from within the genmf.py script saying it cannot find some file. The files fed to the genmf.py script seem to be lines in the /tmp/genmfxxxx file; the problem is that those lines are not created correctly, so when they are fed to the genmf.py script it cannot handle them.

Metamake creates tmpfiles:

./cache.c:    strcpy(tmpname, "/tmp/genmfXXXXXX");

MetaMake actually calls genmf.py to generate the mmakefile; genmf.py is located in bin/$(arch)-$(cpu)/tools.

MetaMake uses time stamps to find out if an mmakefile has changed and needs to be reparsed. For mmakefiles with dynamic targets we would have to avoid that time stamp comparison.

This is, I think, only the case if the metarules change depending on an external config file without the mmakefile itself changing.

But this reminds me of another feature I had in mind for mmake: making it possible to have real files as prerequisites of metatargets. This is to avoid make being called unnecessarily in directories. I would introduce a special character to indicate that a metatarget depends on a file; let's take @ and have the following rule:

__MM__ ::
    echo bar : @foo

This would indicate that for this mmakefile the metatarget 'bar' only has to be built if the file foo changes. So if mmake wants to build metatarget 'bar', it would only call make if the file foo in the same directory as the mmakefile has changed.

This feature would also make it possible to indicate whether the metarules have to be rebuilt; I would allocate the special __MM__ metatarget for that. By default the following implicit metarule would always be there:

__MM__ ::
    echo __MM__ : @mmakefile

But people could add config files if needed:

__MM__ ::
    echo __MM__ : @mmconffile

Does MetaMake really do variable substitution? Yes, have a look in the var.c file.

The generated mmakefile for Demos/Galaxy still has #MM- demo-galaxy : demo-galaxy-$(AROS_TARGET_CPU) and I think the substitution is done later by GNU make.

No, for gmake it is just a comment line; it does not know anything about mmake. The opposite also holds: mmake does not know anything about gmake, it just reads all the lines starting with #MM. So the following does not do what you think it does in a gmake file:

ifeq ($(target), )
#MM includes : includes-here
else
#MM $(target) : includes-here
endif

mmake will see both lines, as it simply ignores the if statement! It will complain if it does not know the target. That is one of the main reasons I proposed the above feature.

The main feature of mmake is that it allows for a modular directory structure: you can add or delete directories in the build tree and MetaMake will automatically adapt the metarules and the build itself to the new situation. For example, it allows checking out only a few subdirectories of the ports directory if one wants to work on one of the programs there.

Macro definition

A macro definition has the following syntax:

     %define macroname option1[=[default][\A][\M]] option2[=[default][\A][\M]] ...
     ...
     %end

macroname is the name of the macro. option1, option2, ... are the arguments for the macro. These options can be used in the body of the template by typing %(option1); this will be replaced by the value of option1.

Each argument can be followed by a default value. If no default value is specified, an empty string is taken. Normally no spaces are allowed in the default value of an argument; if spaces are needed, the value can be surrounded with double quotes (").

Two switches can also be given:

     \A
           This switch means that a value is always required: when the macro
           is instantiated, a value must be assigned to this argument.
     \M
           This switch turns on multi-word values. All the words following
           this argument will be assigned to it. This also means that no
           other argument can follow such an argument, because it would
           become part of this argument's value.
Macro instantiation

The instantiation of the macro is done by using the '%' character followed by the name of the macro to instantiate (without round brackets around it):

     %macro_name [option1=]value [option2=]value

There are two ways to specify values for the arguments of a macro:

     value
           This will assign the value to the first argument defined for this
           macro. The next time this format is used the value will be assigned
           to the second argument, and so on.
     option1=value
           This will assign the given value to the option with the specified name.

When giving values to arguments, double quotes also need to be used if one wants to include spaces in the values.

Macro instantiations may be used inside the body of a macro, even of macros that are only defined later in the macro definition file.

Examples

FIXME (whole rules to be shown as well as action to be used in make rules)
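
Until proper examples are filled in here, the following is a rough sketch of a macro definition and its instantiation; the macro echo_message is hypothetical and not part of the real AROS template file:

     %define echo_message mmake=/A message="a default message"
     #MM %(mmake)
     %(mmake) :
         @echo "%(message)"
     %end

     %echo_message mmake=example-echo message="building the example"

After genmf expansion, the resulting mmakefile contains a metatarget example-echo whose make rule simply echoes the given message.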

AROS Build-System usage


AROS Build-System configuration


Before the build-system can be invoked via make, you will need to run "./configure" to set up the environment for your chosen target platform, e.g.

./configure --target=pc-i386

This causes the configure script to perform the following operations ...

AROS MetaMake configuration file


[add the default settings for mmake]

Default AROS MetaMake MetaTargets


AROS uses a set of base metatargets to perform all the steps needed to build both the tools used to compile AROS and the components that make up AROS itself.

AROS Build MetaMake MetaTargets
 AROS.AROS
 AROS.contrib
 AROS.development
 AROS.bootiso

[list standard metatargets used during the build process]

Special AROS MetaMake MetaTargets
 ************ denotes a Real MetaTarget
 ************-setup
 ************-includes

Default AROS mmakefile Variables


The following variables are defined for use in mmakefiles.

 //System related variables
   $(ARCH)
   $(AROS_HOST_ARCH)
   $(AROS_HOST_CPU)
   $(AROS_TARGET_ARCH)
   $(AROS_TARGET_CPU)
   $(AROS_TARGET_SUFFIX) / $(AROS_TARGET_VARIANT)
 //Arch specific variables
   $(AROS_TARGET_BOOTLOADER)
 //Directory related variables
   $(TOP)
   $(CURDIR)
   $(HOSTDIR)
   $(TOOLDIR)
   $(PORTSDIR)
   $(TARGETDIR)
   $(GENDIR)
   $(OBJDIR)
   $(BINDIR)
   $(EXEDIR)
   $(LIBDIR)
   $(OSGENDIR)
   $(KOBJSDIR)
   $(AROSDIR)
   $(AROS_C)
   $(AROS_CLASSES)
   $(AROS_DATATYPES)
   $(AROS_GADGETS)
   $(AROS_DEVS)
   $(AROS_FS)
   $(AROS_RESOURCES)
   $(AROS_DRIVERS)
   $(AROS_LIBS)
   $(AROS_LOCALE)
   $(AROS_CATALOGS)
   $(AROS_HELP)
   $(AROS_PREFS)
   $(AROS_ENVARC)
   $(AROS_S)
   $(AROS_SYSTEM)
   $(AROS_TOOLS)
   $(AROS_UTILITIES)
   $(CONTRIBDIR)

AROS mmakefile.src High-Level Macros


Note: in the definition of the genmf rules, macro arguments are sometimes shown as default values for another argument (e.g. dflags=%(cflags)). This is not really possible in the definition file, but is achieved by using text that has the same effect.

Building programs

There are two macros for building programs: %build_progs, which compiles every input file into a separate executable, and %build_prog, which compiles and links all the input files into one executable.

%build_progs

This macro will compile and link every input file into a separate executable. It has the following definition (a usage sketch follows the argument list below):

%define build_progs mmake=/A files=/A \
      objdir=$(GENDIR)/$(CURDIR) targetdir=$(AROSDIR)/$(CURDIR) \
      cflags=$(CFLAGS) dflags=$(BD_CFLAGS$(BDID)) ldflags=$(LDFLAGS) \
      uselibs= usehostlibs= usestartup=yes detach=no

With the following arguments:

mmake=/A
This is the name of the metatarget that will build the programs.
files=/A
The basenames of the C source files that will be compiled and
linked to executables. For every name present in this list an
executable with the same name will be generated.
objdir=$(GENDIR)/$(CURDIR)
The directory where the compiled object files will be put.
targetdir=$(AROSDIR)/$(CURDIR)
The directory where the executables will be placed.
cflags=$(CFLAGS)
The flags to add when compiling the .c files. By default the
standard AROS cflags (the $(CFLAGS) make variable) are taken.
This also means that some flags can be added by assigning these
to the USER_CFLAGS and USER_INCLUDES make variables before
using this macro.
dflags=%(cflags)
The flags to add when doing the dependency check. Default is
the same as the cflags.
ldflags=$(LDFLAGS)
The flags to use when linking the executables. By default the
standard AROS link flags will be used.
uselibs=
A list of static libraries to add when linking the executables.
This is the name of the library without the lib prefix or the .a
suffix and without the -l prefix for the use in the flags
for the C compiler.
By default no libraries are used when linking the executables.
usehostlibs=
A list of static libraries of the host to add when linking the
executables. This is the name of the library without the lib prefix
or the .a suffix and without the -l prefix for the use in the flags
for the C compiler.
By default no libraries are used when linking the executables.
usestartup=yes
Use the standard startup code for the executables. By default this
is yes and this is also what one wants most of the time. Only disable
this if you know what you are doing.
detach=no
Whether the executables will run detached. Defaults to no.
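
A minimal usage sketch in an mmakefile.src; the metatarget name, its metaprerequisites and the file list below are hypothetical:

     #MM local-tools : includes linklibs
     %build_progs mmake=local-tools files="tool1 tool2"

This would compile tool1.c and tool2.c from the current directory into two separate executables, tool1 and tool2.
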
%build_prog
It seems that the %build_prog macro is currently always producing stripped binaries, even in debug builds. To work around this problem, I need to define TARGET_STRIP in the following way:

TARGET_STRIP := $(STRIP)

%build_prog mmake="egltest" progname="egltest" files="$(EGL_SOURCES) peglgears" uselibs="GL galliumauxiliary"

Can someone with enough knowledge please fix the macro so that it produces unstripped binaries for debug builds again?

This macro will compile and link the input files to an executable and has the following definition:

     %define build_prog mmake=/A progname=/A files=%(progname) asmfiles= \
           objdir=$(GENDIR)/$(CURDIR) targetdir=$(AROSDIR)/$(CURDIR) \
           cflags=$(CFLAGS) dflags=$(BD_CFLAGS$(BDID)) ldflags=$(LDFLAGS) \
           aflags=$(AFLAGS) uselibs= usehostlibs= usestartup=yes detach=no

With the following arguments:

     mmake=/A
           This is the name of the metatarget that will build the program.
     progname=/A
           The name of the executable.
     files=
           The basenames of the C source files that will be compiled and linked
           into the executable. By default just the name of the executable
           is taken.
     asmfiles=
           The assembler files to assemble and include in the executable. By
           default no asm files are included in the executable.
     objdir=$(GENDIR)/$(CURDIR)
           The directory where the compiled object files will be put.
     targetdir=$(AROSDIR)/$(CURDIR)
           The directory where the executables will be placed.
     cflags=$(CFLAGS)
           The flags to add when compiling the .c files. By default the standard
           AROS cflags (the $(CFLAGS) make variable) are taken. This also means
           that some flags can be added by assigning these to the USER_CFLAGS
           and USER_INCLUDES make variables before using this macro.
     dflags=%(cflags)
           The flags to add when doing the dependency check. Default is the
           same as the cflags.
     aflags=$(AFLAGS)
           The flags to add when compiling the asm files. By default the standard
           AROS aflags (e.g. $(AFLAGS)) are taken. This also means that some
           flags can be added by assigning these to the SPECIAL_AFLAGS make
           variable before using this macro.
     ldflags=$(LDFLAGS)
           The flags to use when linking the executable. By default the
           standard AROS link flags will be used.
     uselibs=
           A list of static libraries to add when linking the executable. This
           is the name of the library without the lib prefix or the .a suffix
           and without the -l prefix for the use in the flags for the C compiler.
           By default no libraries are used when linking the executable.
     usehostlibs=
           A list of static libraries of the host to add when linking the
           executable. This is the name of the library without the lib prefix
           or the .a suffix and without the -l prefix for the use in the flags
           for the C compiler.
           By default no libraries are used when linking the executable.
     usestartup=yes
           Use the standard startup code for the executables. By default this
           is yes and this is also what one wants most of the time. Only disable
           this if you know what you are doing.
     detach=no
            Whether the executable will run detached. Defaults to no.
%build_linklib

Building static link libraries

Building link libraries is straightforward: a list of files will be compiled or assembled and collected into a link library in a specified target directory.

The definition of the macro is as follows:

     %define build_linklib mmake=/A libname=/A files="$(basename $(wildcard *.c))" \
           asmfiles= cflags=$(CFLAGS) dflags=%(cflags) aflags=$(AFLAGS) \
           objdir=$(OBJDIR) libdir=$(LIBDIR)

With the meaning of the arguments as follows:

     mmake=/A
           This is the name of the metatarget that will build the linklib.
     libname=/A
           The base name of the library to generate. The file that will be
           generated will be called lib%(libname).a
     files=$(basename $(wildcard *.c))
           The C files to compile and include in the library. By default all
           the files ending in .c in the source directory will be used.
     asmfiles=
           The assembler files to assemble and include in the library. By
           default no asm files are included in the library.
     cflags=$(CFLAGS)
           The flags to use when compiling the .c files. By default the
           standard AROS cflags (e.g. $(CFLAGS)) are taken. This also means
           that some flags can be added by assigning these to the USER_CFLAGS
           and USER_INCLUDES make variables before using this macro.
     dflags=%(cflags)
           The flags to add when doing the dependency check. Default is the
           same as the cflags.
     aflags=$(AFLAGS)
           The flags to add when compiling the asm files. By default the standard
           AROS aflags (e.g. $(AFLAGS)) are taken. This also means that some
           flags can be added by assigning these to the SPECIAL_AFLAGS make
           variable before using this macro.
     objdir=$(OBJDIR)
           The directory where to generate all the intermediate files. The
           default value is $(OBJDIR) which in itself is by default equal to
           $(GENDIR)/$(CURDIR).
     libdir=$(LIBDIR)
           The directory to put the library in. By default the standard lib
           directory $(LIBDIR) will be used.
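
A usage sketch; the metatarget and library name are hypothetical, and with files left at its default all .c files in the current directory are compiled:

     #MM linklibs-myutil : includes
     %build_linklib mmake=linklibs-myutil libname=myutil

This produces $(LIBDIR)/libmyutil.a from the sources in the current directory.
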
%build_module

Building modules consists of two parts: a macro to use in mmakefile.src files, and a configuration file that describes the contents of the module.

The mmakefile.src macro

This is the definition header of the build_module macro:

     %define build_module mmake=/A modname=/A modtype=/A            \
           conffile=%(modname).conf files="$(basename $(wildcard *.c))" \
           cflags=$(CFLAGS) dflags=%(cflags) objdir=$(OBJDIR)           \
           linklibname=%(modname) uselibs=

Here is a list of the arguments for this macro:

     mmake=/A
           This is the name of the metatarget that will build the module.
           Also a %(mmake)-quick and %(mmake)-clean metatarget will be defined.
     modname=/A
           This is the name of the module without the suffix.
     modtype=/A
           This is the type of the module and corresponds with the suffix of
           the module. At the moment only library, mcc, mui and mcp are
            supported. Support for other module types is planned for the future.
     conffile=%(modname).conf
           The name of the configuration file. Default is modname.conf.
     files="$(basename $(wildcard *.c))"
           A list of all the C source files without the .c suffix that contain
           the code for this module. By default all the .c files in the current
           directory will be taken.
     cflags=$(CFLAGS)
           The flags to use when compiling the .c files. By default the
           standard AROS cflags (e.g. $(CFLAGS)) are taken. This also means
           that some flags can be added by assigning these to the USER_CFLAGS
           and USER_INCLUDES make variables before using this macro.
     dflags=%(cflags)
           The flags to add when doing the dependency check. Default is the
           same as the cflags.
     objdir=$(OBJDIR)
           The directory where to generate all the intermediate files. The
           default value is $(OBJDIR) which in itself is by default equal
           to $(GENDIR)/$(CURDIR).
     linklibname=%(modname)
           The name to be used for the static link library that contains
           the library autoinit code and the stubs converting C stack calling
           convention to a call off the function from the library functable
           with the appropriate calling mechanism. These stubs are normally
           not needed when the AROS defines for module functions are not disabled.
           There will always be a file generated with the name
                 $(LIBDIR)/lib%(linklibname).a
            ... and by default linklibname will be the same as modname.
     uselibs=
           A list of static libraries to add when linking the module. This is
           the name of the library without the lib prefix or the .a suffix
           and without the -l prefix for the use in the flags for the C compiler.
           By default no libraries are used when linking the module.
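
A usage sketch for a hypothetical shared library example.library, whose description would live in example.conf next to the sources:

     #MM workbench-libs-example : includes linklibs
     %build_module mmake=workbench-libs-example \
           modname=example modtype=library

A matching sketch of the configuration file is shown after the section descriptions below.
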
The module configuration file

The module configuration file is subdivided into several sections. A section is defined with the following lines:

     ## begin sectionname
     ...
     ## end sectionname

The interpretation of the lines between the ##begin and ##end statements is different for every section. The following sections are defined:

     * config
           The lines in this section have all the same format:
                 optionname string
           with the string starting from the first non white space after
           optionname to the last non white space character on that line.
           A list of all the options available:
           basename
                 Followed by the base name for this module. This will be used
                 as a prefix for a lot of symbols. By default the modname
                 specified in the makefile is taken with the first letter
                 capitalized.
           libbase
                  The name of the variable to store the library base in. By default
                 the basename will be taken with Base added to the end.
           libbasetype
                  The type to use for the libbase internally in the library
                  code. E.g. the sizeof operator applied to this type
                 has to yield the real size of the object. Be aware that it
                 may not be specified as a pointer. By default
                 'struct LibHeader' is taken.
           libbasetypeextern
                 The type to use for the libbase for code using the library
                 externally. By default 'struct Library' is taken.
           version
                 The version to compile into the module. This has to be
                 specified as major.minor. By default 0.0 will be used.
           date
                 The date that this library was made. This has to have the
                 format of DD.MM.YYYY. As a default 00.00.0000 is taken.
           libcall
                 The argument passing mechanism used for the functions in
                 this module. It can be either 'stack' or 'register'. By
                 default 'stack' will be used.
           forcebase
                 This will force the use of a certain base variable in the
                 static link library for auto opening the module. Thus it
                  is only valid for modules that support auto opening. This
                  option can be present more than once in the config section,
                  and then all these bases will be in the link library. By default
                 no base variable will be present in the link library.
     * cdef
            In this section all the C code has to be written that declares
            the types of the arguments of the functions listed in the
            functionlist section. All valid C code is possible, including the use of #include.
     * functionlist
            In this section all the functions externally accessible by programs
            are listed. For stack based argument passing only a list of the
            functions has to be given. For register based argument passing the
            names of the registers have to be given between round brackets. If
            you have a function foo with the first argument in D0 and the second
            argument in A0, it gives the following line in the list:
                 foo(D0,A0)
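
Putting this together, a minimal example.conf for the hypothetical example.library from the usage sketch above could look like this; all values are illustrative only:

     ## begin config
     basename Example
     version 1.0
     libcall stack
     ## end config

     ## begin cdef
     ULONG ExampleAdd(ULONG a, ULONG b);
     ## end cdef

     ## begin functionlist
     ExampleAdd
     ## end functionlist
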
%build_module_macro

Building modules (the legacy way)

Before the %build_module macro was developed, a lot of code had already been written. There, a mixture of macros was used in the mmakefiles and they were quite complicated. To clean up these mmakefiles without needing to rewrite too much of the code itself, a second genmf macro was created to build modules written using the older methodology. This macro is called %build_module_macro. For writing new modules people should consider this macro deprecated and only use it when %build_module does not yet support the kind of module they want to create.

The mmakefile.src macro

This is the definition header of the build_module_macro macro:

     %define build_module_macro mmake=/A modname=/A modtype=/A \
           conffile=%(modname).conf initfile=%(modname)_init \
           funcs= files= linklibfiles= cflags=$(CFLAGS) dflags=%(cflags) \
           objdir=$(OBJDIR) linklibname=%(modname) uselibs= usehostlibs= \
           genfunctable= genincludes= compiler=target

Here is a list of the arguments for this macro:

     mmake=/A
           This is the name of the metatarget that will build the module.
           It will define that metatarget but won't include any metaprerequisites.
            If you need these you can add them yourself with an extra
            #MM metatarget : ... line. Also a %(mmake)-quick and
           %(mmake)-clean metatarget will be defined.
     modname=/A
           This is the name of the module without the suffix.
     modtype=/A
           This is the type of the module and corresponds with the suffix
           of the module. It can be one of the following : library gadget
           datatype handler device resource mui mcc hidd.
     conffile=%(modname).conf
           The name of the configuration file. Default is modname.conf.
     funcs=
           A list of all the source files with the .c suffix that contain the
            code for the functions of a module. Only one function per C file
            is allowed and the function has to be defined using the
            AROS_LHA macros.
     files=
           A list of all the extra files with the .c suffix that contain the
           extra code for this module.
     initfile=%(modname)_init
           The file with the init code of the module.
     cflags=$(CFLAGS)
           The flags to add when compiling the .c files. By default the
            standard AROS cflags (the $(CFLAGS) make variable) are taken.
           This also means that some flags can be added by assigning these
           to the USER_CFLAGS and USER_INCLUDES make variables before using
           this macro.
     dflags=%(cflags)
           The flags to add when doing the dependency check. Default is the
           same as the cflags.
     objdir=$(OBJDIR)
           The directory where to generate all the intermediate files. The
           default value is $(OBJDIR) which in itself is by default equal
           to $(GENDIR)/$(CURDIR).
     linklibname=%(modname)
           The name to be used for the static link library that contains the
           library autoinit code and the stubs converting C stack calling
            convention to a call of the function from the library functable
           with the appropriate calling mechanism. These stubs are normally
            not needed when the AROS defines for module functions are not disabled.
           There will always be a file generated with the name :
                 $(LIBDIR)/lib%(linklibname).a
           ... and by default linklibname will be the same as modname.
     uselibs=
           A list of static libraries to add when linking the module. This
           is the name of the library without the lib prefix or the .a suffix
           and without the -l prefix for the use in the flags for the C compiler.
           By default no libraries are used when linking the module.
     usehostlibs=
           A list of static libraries of the host to add when linking the module.
           This is the name of the library without the lib prefix or the .a
           suffix and without the -l prefix for the use in the flags for the
           C compiler.
           By default no libraries are used when linking the module.
     genfunctable=
            Boolean that has to have the value yes or no, or be left empty. This
            indicates whether the functable needs to be generated. If empty, the
            functable will only be generated when funcs is not empty.
      genincludes=
            Boolean that has to have the value yes or no, or be left empty. This
            indicates whether the includes need to be generated. If empty, the
            includes will only be generated for a library, a gadget or a device.
     compiler=target
           Indicates which compiler to use during compilation. Can be either
           target or host to use the target compiler or the host compiler.
           By default the target compiler is used.
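
A usage sketch; the metatarget, module name and file lists are hypothetical:

     %build_module_macro mmake=workbench-libs-legacy modname=legacy \
           modtype=library funcs="legacyadd legacysub" files="support"
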
The module configuration file

For the build_module_macro two files are used. First is the module configuration file (modname.conf or lib.conf) and second is the headers.tmpl file.

The module config file is a file with a number of lines with the following syntax:

     name <string>
           Init the various fields with reasonable defaults. If <string> is
           XXX, then this is the result:
                 libname         xxx
                 basename        Xxx
                 libbase         XxxBase
                 libbasetype     XxxBase
                 libbasetypeptr  XxxBase *
           Variables will only be changed if they have not yet
           been specified.
     libname <string>
           Set libname to <string>. This is the name of the library
           (i.e. you can open it with <string>.library). It will show up
           in the version string, too.
     basename <string>
           Set basename to <string>. The basename is used in the AROS-LHx
           macros in the location part (last parameter) and to specify defaults
           for libbase and libbasetype in case they have no value yet. If
           <string> is xXx, then libbase will become xXxBase and libbasetype
           will become xXxBase.
     libbase <string>
           Defines the name of the library base (i.e. SysBase, DOSBase,
           IconBase, etc.). If libbasetype is not set, then it is set
           to <string>, too.
     libbasetype <string>
            The type of the libbase (with struct), i.e. struct ExecBase,
            struct DosLibrary, struct IconBase, etc.
     libbasetypeptr <string>
           Type of a pointer to the libbase. (e.g. struct ExecBase *).
     version <version>.<revision>
           Specifies the version and revision of the library. 41.0103
           means version 41 and revision 103.
     copyright <string>
           Copyright string.
     define <string>
           The define to use to protect the resulting file against double
           inclusion (i.e. #ifndef <string>...). The default is _LIBDEFS_H.
     type <string>
            What kind of library is this? Valid values for <string>
           are: device, library, resource and hidd.
     option <string>...
           Specify an option. Valid values for <string> are:
           o noexpunge
                 Once the lib/dev is loaded, it can't be removed from
                 memory. Be careful with this option.
           o rom
                 For ROM based libraries. Implies noexpunge and unique.
           o unique
                 Generate unique names for all external symbols.
           o nolibheader
                 We don't want to use the LibHeader prefixed functions
                 in the function table.
           o hasrt
                 This library has resource tracking.
           You can specify more than one option in a config file and more
           than one option per option line. Separate options by space.
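
A short, hypothetical lib.conf illustrating this syntax:

     name legacy
     version 41.1
     type library
     option rom

Per the name rule above, this yields libname legacy, basename Legacy, libbase LegacyBase and libbasetype LegacyBase.
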
The header.tmpl file

Contrary to the %build_module macro, for %build_module_macro the C header information is not included in the configuration file; instead an additional file named headers.tmpl is used. This file has different sections, where each of the sections will be copied into a certain include file that is generated when the module is built. A section has the following structure:

     ##begin sectionname
     ...
     ##end sectionname

With sectionname being one of the following choices:

     * defines
     * clib
     * proto
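
A sketch of the overall layout of a headers.tmpl file; the comments merely stand in for the real content, which depends on the module:

     ##begin defines
     /* preprocessor definitions for the module go here */
     ##end defines

     ##begin clib
     /* C prototypes for the module's functions go here */
     ##end clib

     ##begin proto
     /* proto-style header content goes here */
     ##end proto
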
%build_archspecific

Compiling arch and/or CPU specific files

In the previous paragraphs it was explained how a module can be built with the AROS genmf macros. Sometimes one wants to replace certain files in a module with an implementation that is only valid for a certain arch or a certain CPU.

The macro definition

Arch-specific files are handled by the macro called %build_archspecific, which has the following header:

     %define build_archspecific mainmmake=/A maindir=/A arch=/A files= asmfiles= \
           cflags=$(CFLAGS) dflags=%(cflags) aflags=$(AFLAGS) compiler=target

And the explanation of the argument to this macro:

     mainmmake=/A
           The mmake of the module from which one wants to replace files
            or to which to add additional files.
     maindir=/A
           The directory where the object files of the main module are stored.
            This is only the path relative to $(GENDIR). Most of the time this
           is the directory where the source files of the module are stored.
     arch=/A
            The architecture for which these files need to be built. It can
            have three different forms: ARCH-CPU, ARCH or CPU. For example, when
            linux-i386 is specified these files will only be built for the
            linux port on i386. With ppc they will be built for all PPC processors
            and with linux they will be built for all linux ports.
     files=
            The basenames of the C source files to replace or add to the module.
     asmfiles=
           The basenames of the asm source files to replace or add to the module.
     cflags=$(CFLAGS)
           The flags to add when compiling the .c files. By default the standard
            AROS cflags (the $(CFLAGS) make variable) are taken. This also means
           that some flags can be added by assigning these to the USER_CFLAGS
           and USER_INCLUDES make variables before using this macro.
     dflags=%(cflags)
           The flags to add when doing the dependency check. Default is the
           same as the cflags.
     aflags=$(AFLAGS)
           The flags to add when assembling the asm files. By default the
            standard AROS aflags (the $(AFLAGS) make variable) are taken. This
           also means that some flags can be added by assigning these to the
           SPECIAL_AFLAGS make variable before using this macro.
     compiler=target
           Indicates which compiler to use during compiling C source files.
           Can be either target or host to use the target compiler or the
           host compiler. By default the target compiler is used.
%rule_archalias

Code shared by different ports

A second macro called %rule_archalias allows creating a virtual architecture whose code is shared between several real architectures. Most likely this is used for code that uses an API that is shared between several architectures but not all of them.

The macro has the following header:

%define rule_archalias mainmmake=/A arch=/A alias=/A

With the following arguments:

mainmmake=/A
The mmake of the module from which one wants to replace files, or
to which to add additional files.
arch=/A
The arch one wants to make an alias from.
alias=/A
The arch one wants to alias to.

Examples

1. This is an extract from the file config/linux/exec/mmakefile.src that replaces the main init.c file of exec with a Linux-specific one:

%build_archspecific \
      mainmmake=kernel-exec maindir=rom/exec arch=linux \
      files=init compiler=host

2. For the dos.library some arch-specific files are grouped together in the unix arch. The following lines are present in the respective mmakefiles to make this possible:

In config/linux/mmakefile.src:

%rule_archalias mainmmake=kernel-dos arch=linux alias=unix

In config/freebsd/mmakefile.src:

%rule_archalias mainmmake=kernel-dos arch=freebsd alias=unix

And finally in config/unix/dos/mmakefile.src:

%build_archspecific \
            mainmmake=kernel-dos maindir=rom/dos \
            arch=unix \
            files=boot \
            compiler=host

AROS mmakefile.src Low-Level Macros


Libraries


A simple library that uses a custom suffix (.wxt) returns TRUE in its init function, yet the Open code never gets called and OpenLibrary fails (the init function does get called, though). With a conf file that has no ##functionlist section I get the error: In readref: Could not open (null)

Genmodule tries to read a ref file when no ##functionlist section is available. After adding a dummy function to the conf file it worked for me. Take care: I haven't added any flags to avoid creating header files and such.

How do you deal with library base pointers in plug-ins when you call library functions?

Use only one function, called to make the "plugin" register all its hooks with Wanderer. Iterate through the plugin directory and, for each file ending in ".wxt", create an internal plugin structure in which the pointer to the libbase of the OpenLibrary'd plugin is stored. After enumerating the plugins, iterate over the list of plugin structs and call the single library function, which causes them all to register with Wanderer. (The original problem: some of the struct Library fields had been used directly – lib_Node.ln_Name was the culprit.)

We should remove the dos.c, intuition.c, etc. files with hardcoded version numbers from autoinit and replace them with -ldos -lintuition inside the gcc specs file. This would avoid starting programs on older versions of libraries. If an older version suffices, some __xxx_version global can be defined in the program code to enable this. Based on the info described below, we could also provide exec_v33 and exec_v45 link libraries that would make sure no function of a newer version is used – a very clean solution to get the desired effect.

-noarosc mentions checking the specs file to find out about it, but there is nothing related in the specs file. This option was added to disable automatic linking of arosc to all libraries; it was used in the build_library macro – check V0. Automatically linking arosc.library, which had per-task context, to other libraries which had global context was a very bad thing: "C standard library" objects belonging to a global-context library were allocated in the opening task's context. When the task exited but the global-context library did not, the global-context library was using "freed" memory.

A note to any of you wanting to upgrade to Ubuntu 12.10, or any distribution that uses gcc 4.7. There is an issue (bug? misfeature?) in gcc 4.7 where the '-specs /path/to/spec/override' is processed *after* gcc checks that it has been passed valid arguments. This causes gcc to fail with the error:

gcc-4.7: error: unrecognized command line option "-noarosc"

when it is used to link programs for the x86 and x86_64 targets if you are using the native host's compiler (for example, when compiling for linux-x86_64 hosted). Please use gcc-4.6 ("export CC=gcc-4.6") for hosted builds until further notice (still valid as of March 2013).

Per task


There are other things for which arosc.library needs to be per-task based: auto-closing of open files and auto-freeing of malloced memory when a program exits, and a per-task errno and environ variable that can be changed by calling library functions.

regina.library also does that by linking with arosc_rel. It needs some more documentation to make it usable by other people; you can grep for aroscbase inside the regina source code to see where it is used. regina.library and arosc.library are per-task libraries. Each time regina.library is opened it also opens arosc.library, and it then gets the same libbase as the program that uses regina.library.

By linking with arosc_rel and defining aroscbase_offset, arosc.library functions called from regina.library will be called with the arosc libbase stored in regina's own libbase, and the latter is different for each task that has opened regina.library.

The AROS_IMPORT_ASM_SYM of aroscbase in the startup section of regina.conf ensures that the arosc.library init functions are called even if the program that uses regina.library does not use an arosc.library function itself and normally would not auto-open it.

The problem is that both the bz2 and z libraries use stdio functions. arosc.library uses POSIX file descriptors, which are of type int, to refer to files. The same file descriptor will point to different files in different tasks; that's why arosc.library is a per-task library. A FILE * pointer internally stores a file descriptor that then links to the file.

Now bz2 and z also use stdio functions, and thus they too need a different view of the file descriptors depending on which program the functions are called from. That's why bz2 and z also become per-task libraries.

It breaks POSIX compatibility to use a type other than int for file descriptors. Would a better solution be to assign a globally unique int to each file descriptor, and thus avoid the need to make arosc.library a per-task library? A far simpler solution: all DOS FileHandles and FileLocks are allocated from MEMF_31BIT memory. Then we can be assured that their BPTRs fit into an int.

int open(const char *path, int flags, int mode)
{
   BPTR ret;
   ULONG rw = ((flags & O_READ) && !(flags & O_WRITE)) ? MODE_OLDFILE : MODE_NEWFILE;

   ret = Open(path, rw);
   if (ret == BNULL) {
       IoErr_to_errno(IoErr());
       return -1;
   }
   return (int)(uintptr_t)ret;
}

void close(int fd)
{
  Close((BPTR)(uintptr_t)fd);
}

static inline BPTR ftob(int fd)
{
   return (fd == 0) ? Input() :
          (fd == 1) ? Output() :
          (fd == 2) ? ErrorOutput() :
          (fd < 0) ? BNULL :
          ((BPTR)(uintptr_t)fd);
}

int read(int fd, void *buff, size_t len)
{
  int ret;
  ret = Read(ftob(fd), buff, len);
  if (ret < 0)
    IoErr_to_errno(IoErr());
  return ret;
}

You will most likely kill the 64-bit Darwin hosted target; AFAIR it has 0 (zero) bytes of MEMF_31BIT memory available.

Must modules which use per-task libraries themselves be implemented as per-task libraries? Is it a bug or a feature that I now get an error about missing symbol-set handling? You will now see more verbose errors for missing symbol sets, for example:

Undefined symbol: __LIBS__symbol_set_handler_missing
Undefined symbol: __CTORS__symbol_set_handler_missing

By linking with jpeg and arosc, instead of jpeg_rel and arosc_rel, it was pulling in the PROGRAM_ENTRIES symbolset for arosc initialization. Since jpeg.datatype is a library, not a program, the PROGRAM_ENTRIES was not being called, and some expected initialization was therefore missing.

It is the ctype changes that are causing the problem. This code now uses the ADD2INIT macro to add something to the initialization of the library. As you don't handle these init sets in your code, it gives an error. For now you can use -larosc.static -larosc, or implement init set handling yourself.

The reason for the move to ctype handling is that in the future we may want to have locale handling in the C library, so toupper/tolower may behave differently for different locales. This was not possible with the ctype stuff in the link lib. Ideally, in the source code, sqlite3-aros.c would be replaced with sqlite3.conf and genmodule would be called from makefile-new.

  • I *strongly* recommend that you *not* use %build_module_simple for pertask/peropener libraries for now. There is a PILE of crap that genmodule needs to do *just*exactly*right* to get them to work, and that pile is still in flux at the moment.

Use %build_module, and add additional initialization with the ADD2*() family of macros.

If you insist on %build_module_simple, you will need to link explicitly with libautoinit.

To handle per-task stuff manually:

  LibInit:
     you will need to call AllocTaskStorageSlot() to get a task storage
     slot, and save that in your global base
  LibExpunge:
     FreeTaskStorageSlot() the slot
  LibOpen:
     use SetTaskStorageSlot() to put your task-specific data in the
     task's slot
  LibClose:
     set the task's storage slot to NULL

You can get the task-specific data in one of your routines, using GetTaskStorageSlot().

If you're not using the stackcall API, that's the general gist of it.

I would recommend that you use the static libraries until the pertask/peropener features have stabilized a bit more. You can always go back to dynamic linking to pertask/peropener libs later.

You should be able to use arosc.library without it needing to be pertask. Things get more complicated if code in the library uses file handles, malloc, errno or similar things.

Is the PROGRAM_ENTRIES symbolset correct for arosc initialization then, or should it be in the INIT set? If so, move arosc_startup.c to the INIT set.

Think about datatypes. Zune (muimaster.library) caches datatype objects. Task A may be the one triggering NewDtObject(). Task B may be the one triggering DisposeDTObject().

NewDTObject() does an OpenLibrary of, say, png.datatype. DisposeDTObject() does a CloseLibrary of, say, png.datatype.

If png.datatype uses some per-task z.library, that's a problem, isn't it? As png.datatype is not per-opener and is linked with arosc, there should only be a problem when png.datatype is expunged from memory, not when it is opened or closed. It will also use the arosc.library context from the task that calls the Init function vector of png.datatype, and it will only be closed when the Expunge vector is called.

relbase

stackcall/peropener
  • library.conf: relbase FooBase -> rellib foo
  • rellib working for standard and peropener/pertask libraries
  • <proto/foo.h> automatically will use <proto/foo_rel.h> if 'rellib foo' is used in the library's .conf
  • "uselibs" doesn't need to manually specify rellib libraries

arosc_rel.a is meant to be used from shared libraries, not from normal programs. Auto-opening of it is also not finished; manual work is needed ATM.

z_au, png_au, bz2_au, jpeg_au, and expat_au now use the relbase subsystem. The manual init-aros.c stub is no longer needed. Currently, to use relative libraries in your module, you must do the following (a sketch follows the list):

  1. Enable 'options pertaskbase' in your library's .conf
  2. Add 'relbase FooBase' to your library's .conf for each relative library you need.
  3. Make sure to use the '<proto/foo_rel.h>' headers instead of '<proto/foo.h>'
  4. Link with 'uselibs=foo_rel'
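
As a sketch only (the directive spellings changed during this period, and the module and base names are made up), a module using z.library relatively might look like:

# mymodule.conf (fragment)
##begin config
basename MyModule
options pertaskbase
relbase ZBase
##end config

# In the C sources:  #include <proto/z_rel.h>   (instead of <proto/z.h>)
# In the mmakefile:  uselibs="z_rel"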

I can't find a valid way to implement peropener libraries with 'stack' functions without a real ELF dynamic linker (i.e. ld-linux.so). The inherent problem is determining where the 'current' library base is when a stack function is called.

(In the following examples, assume stack.library is a peropener library and StackFunc() is a stack-style function in that library. plugin.library uses stack.library.)

Example 1 – Other libraries doing weird things behind your back

  extern struct Library *StackBase;  /* set up by autoinit */

  void foo(void)
  {
      struct Library *PluginBase;

      StackFunc();  // Called with the expected global StackBase
      PluginBase = OpenLibrary("plugin.library", 0);
      /* Note that plugin.library did an OpenLibrary() of "stack.library".
       * In the current implementation, this now sets the taskslot of
       * stack.library to the new per-opener stack.library base.
       */
      StackFunc();  // Unexpectedly called with the new stack.library base!!!
      CloseLibrary(PluginBase);
      /* The CloseLibrary() resets the taskslot to the old base */
      StackFunc();   // Back to the old base
  }

OK, to fix that issue, let's suppose we use a stub wrapper that sets the taskslot to the (global) library base. This was no problem with the old implementation: StackBase was passed to StackFunc() in a scratch register each time it was called, and that base was then used.

Example 2 – Local vs Global bases

  extern struct Library *StackBase;  /* set up by autoinit */

  void bar(void)
  {
      StackFunc(); // Called with the global StackBase, as expected

      {
        struct Library *StackBase = OpenLibrary("stack.library", 0);

        StackFunc(); // WTF! The linklib wrapper used the *global* StackBase, not the local one!

        CloseLibrary(StackBase);
      }

      StackFunc();  // Works as expected
  }

Hmm. Ok, that behavior is going to be a little weird to explain to developers. I don't see the need to support local bases.

Example 3 – Callback handlers

  extern struct Library *StackBase;  /* set up by autoinit */

  const struct handler {
    void (*dostack)(void);
  } handler = { .dostack = StackFunc };

  void bar(void)
  {
     /* Who knows which base this is called with?
      * It depends on a number of things: it could be the global StackBase,
      * or the most recently OpenLibrary()'d stack.library base.
      */
     handler.dostack();
  }

Function pointers to functions in a peropener library may be a problem, but is that needed?

All in all, until we have either

  1. a *real* ELF shared library subsystem, or

  2. a real use case for peropener libraries.

In the C library split, arosstdc.library has a peropener base. The reasoning is that you may sometimes want to do malloc() in a library and not have that memory freed when the Task that happened to call the function exits. Take, say, a picture caching library that uses ported code which internally uses malloc(). With a pertask library, the malloc() allocates memory on the Task that is currently calling the library, and this memory disappears when that task quits (it should do free() prior to exit). Before your change, the caching library could just link with libarosstdc.a (and not libarosstdc_rel.a) and it worked.

An idea could be to either link(*) or unlink(**) the malloc to a given task, depending on where it is called from (within the library or not). No, the whole point is to have the malloced memory _not_ attached to a task, so that a cached image can be used from different tasks even if the first task has already died.

static

[edit | edit source]

For the few shared libraries that need a separate static version, name the static link library differently, e.g. libz_static.a. Ideally all code should just work with the shared library version.

Module          Link Library           uselibs=
----------      ------------           --------------
Foo.datatype => libfoo.datatype.a   => foo.datatype
Foo.library  => libfoo.a            => foo
foo (static) => libfoo.static.a     => foo.static

And the 'misc static libs' (libamiga.a, libarossupport.a, etc.):

libmisc.a    => libmisc.a           => misc

usestartup=no and the '-noarosc' LDFLAG both imply arosc.static (it doesn't hurt to link it, and if you really want arosc.library, that will preempt arosc.static)

Again, this will make -lz not link with the shared library stubs. IMO uselibs=z should use the shared library by default.

For example, the 'uselibs="utility jpeg.datatype z.static arossupport"' method.

If there's a dynamic version of a library, it should always be used: static linking of libraries should be discouraged for all the usual reasons, e.g. the danger of embedding old bugs (not just security holes), bloat, etc. I don't see the need for a -static option (or any other way to choose between static and dynamic libraries).

Makedepend

[edit | edit source]

The AROS build system generates for each .c file a .d file listing the includes it depends on. The .c file is recompiled when any of those includes changes. Remember that AROS is an OS under development, so we often make changes to the core header files. If this makedepend step were not done, programs would not be rebuilt when changes are made to AROS libraries or other core code. So, in short, it creates the dependencies of the .o files.
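
The principle is standard gcc -MM dependency generation; a generic sketch (not the actual AROS rule) looks roughly like this:

# Generate foo.d from foo.c so that foo.o is rebuilt whenever one of
# the headers foo.c includes changes.
%.d: %.c
	$(CC) -MM -MT $(patsubst %.d,%.o,$@) -o $@ $<

# Pull the generated dependency fragments into the makefile.
-include $(wildcard *.d)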

mmakefile

[edit | edit source]

We do get an error from it, so something is in fact going wrong. But what is?

Probably a hacky mmakefile, so that the include file is not found during makedepend but is found during compilation, or maybe a wrong dependency, so it is not guaranteed that the include file is there during makedepend. And I do think it would be better if the build stopped when such an error occurs.

configuration files

[edit | edit source]

We are talking about configuration files for modules like this: rom/graphics/graphics.conf.

I have been thinking about similar things, but first I would like to convert our proprietary .conf format to XML. Manually writing file parsers is so passé :)

Uhh.. I have no objection to using a 'standard' parser, but I have to vote no on XML *in specific*.

JSON or YAML (summaries of both are on Wikipedia) would be better choices, since they are much more human-readable but semantically equivalent to XML.

I agree that XML is not the easiest format to edit in a text editor and is quite bloated. On the other hand, it is ubiquitous in scripting and programming languages, text editors and IDEs. I also like that the validity of an XML file can be checked through a schema file, which can also guide the editor. There are also tools to easily convert XML files based on such a schema, etc. It does not matter what format it is in, but it should take as much coding as possible away from the (genmodule) programmer.

Another improvement over XML could be the inclusion of literal code. Currently some literal code snippets are included in the .conf file, and in XML they would need some character encoding. How is this for JSON or YAML?

YAML supports Unicode internally. I don't know how well that could be ported to AROS, though, since it seems AROS doesn't have Unicode support yet. JSON is based on JavaScript notation, and YAML 1.2 can import JSON files since it is defined as a complete superset of JSON. The only YAML 1.2 implementation is in C++, using CMake as its build script creator. If we use the C implementation, libyaml, it's only YAML 1.1 compliant and loses the backward compatibility with JSON.

Any data language can be checked against a scheme; it's mostly a matter of writing out the schemes to check against. You can, but my question is whether the tools exist. From the second link you provided: "There are a couple of downsides to YAML: there are not a lot of tools available for it and it's also not very easy to validate (I am not aware of anything similar to a DTD or a schema)". I find validation/syntax checking as important as human readability. Syntax checking is part of the parsing in all four cases. The validation XML can do is whether the document conforms to the parsing and whether it conforms to a specific scheme. YAML and JSON are specifically intended for structured data, and I guess my example is too, so the equivalent XML scheme would check whether the content was correctly structured. The other three don't need that, as anything they parse is by definition structured data.

All four have the same solution: they are all essentially tree builders, and you can walk the tree to see if each node conforms to your content scheme. The objective is to use a defined schema/DTD for the files that describe a library. Text editors that understand schemas can then let you add only fields that are valid according to the schema. Such a schema lets everyone validate whether an XML file is a valid XML library description file; they can use standard tools for that.

AFAICS JSON and YAML parsers only validate whether the input file is valid JSON/YAML, not whether it is a valid JSON/YAML library description file. AFAICS no such tools exist for these file formats.

ETask Task Storage

[edit | edit source]
__GM_* functions
  __GM_BaseSlot: externally visible slot ID (for the AROS_RELLIBFUNCSTUB() assembly routines)

  __GM_SetBase_Safe: Set (and reallocate) task storage

    Static function, only called in a library's InitLib() and OpenLib() code.

  __GM_GetBase_Safe: Get task storage slot

    This is the 'slow' version of __GM_GetBase(), which calls Exec/GetTaskStorageSlot(). Returns NULL if the
    slot does not exist or is unallocated.

  __GM_GetBase: Get task storage slot (unsafe)

    This is the 'fast' version of __GM_GetBase(), which does not need to perform any checking. This function
    is provided by the CPU specific AROS_GM_GETBASE() macro (if defined). The fallback is the same
    implementation as __GM_GetBase_Safe

  __AROS_GM_GETBASE: Fast assembly 'stub' for getting the relbase

    Designed to be used in the AROS_RELLIBFUNCSTUB() implementation.

    Does not do any sanity checking. Guaranteed to be run only if (a) InitLibrary() or OpenLibrary() has
    already been called in this ETask context or (b) this ETask is a child of a parent who has opened
    the slot's library.

    I can generate implementations of this for arm, m68k, and i386, but I want the location of TaskStorage
    to be agreed upon before I do that work and testing.

    AROS_GM_GETBASE(): Generates a C function wrapper around the fast stub.

Genmodule no longer has to have internal understanding of where the TaskStorage resides. All of that knowledge is now in exec.library and the arch/*-all/include/aros/cpu.h headers.
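
As a rough illustration only (this is not the generated code, and the exact type of __GM_BaseSlot is an assumption), the safe path boils down to something like:

#include <exec/libraries.h>
#include <proto/exec.h>

extern LONG __GM_BaseSlot;   /* slot ID, allocated when the library was initialised */

static struct Library *GM_GetBase_Safe_sketch(void)
{
    /* GetTaskStorageSlot() returns 0/NULL if this task never had the
     * slot set, which is exactly the "slot unallocated" case above. */
    return (struct Library *)GetTaskStorageSlot(__GM_BaseSlot);
}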

Location of the TaskStorage slots

It was important to me that the address of the ETask does not change. For example, it would be pretty bad if code like this broke:

    struct ETask *et = FindTask(NULL)->tc_UnionETask.tc_ETask;
    ...
    UnzipFile("foo.zip");  <= opens z_au.library, slots reallocated
    ..
    if (et->et_Parent) {   <= ARGH! et was freed!
       ....

Also, I wanted to minimize the number of places that need to be modified if the TaskStorage location needed to be moved (again).

et_TaskStorage is automatically resized by Exec/SetTaskStorageSlot() as needed, and a new ETask's et_TaskStorage is cloned from its parent, if the parent was also an ETask with et_TaskStorage. What I wanted to say here is that some overhead may be acceptable for SetTaskStorageSlot() if it is properly documented, e.g. that it should not be called in time-critical paths.

The parent's TaskStorage is cloned when creating a subtask, as before. This may be acceptable if it is documented that a slot allocated in the parent may not be valid in the child if it was allocated after the child was created. For other use cases I think it is acceptable to require that a Task first does a SetTaskStorageSlot() before getting the value.

Auto generation of oop.library

[edit | edit source]

I have updated genmodule to be capable of generating interface headers from the foo.conf of a root class, and have tested it by updating graphics.hidd to use the autogenerated headers.

Hopefully this will encourage more people to use the oop.library subsystem, by making it easier to create the necessary headers and stubs for an oop.library class interface.

Note that this is still *completely optional*, but is encouraged.

There are plans to extend this to generating Objective C interfaces in the future, as well as autoinit and relbase functionality.

This allows a class interface to be defined, and will create a header file in $(AROS_INCLUDES)/interface/My_Foo.h, where 'My_Foo' is the interface's "interfacename". In the future, this could be extended to generate C++ pure virtual class headers, or Objective C protocol headers.

The header comes complete with aMy_Foo_* attribute enums, pMy_Foo_* messages, moMy_Foo_* method offsets, and the full assortment of interface stubs (see the usage sketch after the example below).

To define a class interface, add to the .conf file of your base class:

     ##begin interface
     ##begin config
     interfaceid my.foo
     interfacename My_Foo
     methodstub myFoo         # Optional, defaults to interfacename
     methodbase MyFooBase
     attributebase MyFooAttrBase
     ##end config

     ##begin attributelist
     ULONG FooType   # [ISG] Type of the Foo
     BOOL IsBar      # [..G] Is this a Bar also? <- comments are preserved!
     ##end attributelist

     ##begin methodlist
     VOID Start(ULONG numfoos)  # This comment will appear in the header
     BOOL Running()
     .skip 1 # BOOL IsStopped() Disabled obsolete function
     VOID KillAll(struct TagItem *attrList)
     ##end methodlist
     ##end interface
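
For client code, the generated names would then be used roughly like this (a sketch only: the header path, ID names and struct member names below follow the naming conventions described above and are assumptions, not verified genmodule output):

#include <proto/oop.h>
#include <oop/oop.h>
#include <interface/My_Foo.h>      /* the autogenerated header */

void poke_foo(OOP_Object *foo)
{
    IPTR isbar = 0;

    /* Read an attribute via the generated aMy_Foo_* ID */
    OOP_GetAttr(foo, aMy_Foo_IsBar, &isbar);

    /* Invoke a method via the generated message struct and method offset */
    struct pMy_Foo_Start msg =
    {
        .mID     = OOP_GetMethodID(IID_My_Foo, moMy_Foo_Start),
        .numfoos = 3,
    };
    OOP_DoMethod(foo, (OOP_Msg)&msg);
}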

Documentation

[edit | edit source]

It would be nice if we could just upload the diff (maybe as a zip file) and then the patching would be done automatically.

If you have a local copy of the whole website, you can update only the file(s) that have changed with an rsync-type script (maybe rsync itself works for the purpose).

# Your C++ files
CXX_FILES :=  main.cpp debug.cpp subdir/module.cpp

# subdir slashes are replaced by three underscores
CXX_OBJS := $(addprefix $(GENDIR)/$(CURDIR)/, $(addsuffix .o, $(subst /,___,$(CXX_FILES))))

CXX_FLAGS := -W -Wall -Wno-long-long -fbounds-check

CXX_CC = $(TOOLDIR)/crosstools/$(AROS_TARGET_CPU)-aros-g++

CXX_DEPS := $(patsubst %.o,%.d,$(CXX_OBJS))

$(CXX_DEPS):
    @echo Makedepend $(patsubst %.d,%.cpp,$(subst ___,/,$(notdir $@)))...
    @$(CXX_CC) $(CXX_FLAGS) -MM -MT $(patsubst %.d,%.o,$@) -o $@ $(patsubst %.d,%.cpp,$(subst ___,/,$(notdir $@)))
    @echo $@: $(patsubst %.d,%.cpp,$(subst ___,/,$(notdir $@))) >>$@

-include $(CXX_DEPS)

$(CXX_OBJS):
%compile_q \
        cmd=$(CXX_CC) \
        opt=$(CXX_FLAGS) \
        from="$(patsubst %.o,%.cpp,$(subst ___,/,$(notdir $@)))" \
        to=$@

Make sure your target depends on both the deps and the objs:

emumiga-library: $(CXX_DEPS) $(CXX_OBJS)

The AROS build system

arch/common is for drivers where it is difficult to say to which CPU and/or arch they belong: for example a graphics driver using the PCI API could as well run inside hosted linux as on PPC native.

Then it's arch-independent code and it should in fact be outside of arch. Currently they are in workbench/devs/drivers. This can be discussed, but it looks like it's just a matter of being accustomed to a particular location. At least no one has changed this.

Even if it's not specific to a particular platform, the code in arch/common is hardware-dependent, whereas the code in rom/ and workbench/ is supposed to be non-hardware-specific. This has been discussed before, when you moved other components (e.g. ata.device) from arch/common to rom/devs. IIRC you accepted that that move was inappropriate in retrospect (but didn't undo it).

Having said that, arch/all-pc might be a good place for components shared between i386-pc and x86_64-pc such as the timer HIDD. On further inspection it seems that most drivers are already in workbench/hidds.

Introduction

[edit | edit source]

The AROS build system is based around the GNU toolchain. This means we use gcc as our compiler, and the build system needs a POSIX environment to run.

Currently AROS has been successfully built using the following environments:

  • Linux, various architectures and distributions. This has been, for a long time, a primary development platform. Most of our nightly builds are running under Linux.
  • MacOS X (more technically known as Darwin).
  • Cygwin, running on Windows.
  • MinGW/MSYS, running on Windows (both the 32-bit and 64-bit versions of MinGW have been tested).

Of these two Windows environments, MinGW is the preferred one because of its significantly faster operation (compared to Cygwin). There is, however, a known problem: if you want to build a native port, GRUB2 can't be built. Its own build system is currently incompatible with MinGW and will fail. You can work around this by using the --with-bootloader=none argument when configuring AROS. This disables building the primary bootloader; you can perfectly live with that if you already have GRUB installed.
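
For example, configuring a native pc-i386 build without the bootloader might look like this (the paths and target name are illustrative):

$ mkdir aros-build && cd aros-build
$ ../AROS/configure --target=pc-i386 --with-bootloader=none
$ make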

Running on a host whose binary format is different from ELF (i.e. Darwin and Windows) requires you to use a native AROS-targeted crosstoolchain. It can be built together with AROS; however, using a standalone preinstalled toolchain significantly shortens the build time and saves drive space. A good set of prebuilt toolchains for Darwin and Windows can be obtained from AROS Archives.

Cross-compiling a hosted version of AROS additionally requires a second crosstoolchain, targeted at what will be your host. For example, if you're building Windows-hosted AROS under Linux, you'll need a Windows-targeted crosstoolchain. Because of this, building a hosted version is best done on the same system it will run on.

In the past, configure found e.g. i386-elf-gcc etc. on the path during a cross-compile without passing special options. I'd like to retain that capability. That *should* still work if you pass in --disable-crosstools.

Remember, --enable-crosstools is the default now, and it would be silly to use the external crosstools if AROS is just going to build its own anyway.

For the kernel tools though, yes, I definitely agree. Let me know if you have a system where the kernel tool type isn't detected properly.

Are you making use of threaded builds (make -j X)? If not, it might be worth using. Please don't; vps is a virtual machine also running some web sites, and I don't want to starve everything else running on that machine. I appreciate what you are saying, but without info on the virtualised hardware I can't really comment. How many "cores" does the VM have? If it has more than 2, I don't see why adding an additional thread (make -j 2) should cause any noticeable difference to the web services it also hosts.

On 26 February 2012, configure was restructured to generate three sets of *_*_{cc,as,objdump,...} definitions.

If we are building crosstools:

orig_target_* - AROS-built toolchain (in bin/{host}/tools/crosstools/...)

aros_kernel_* - External toolchain, if --with-kernel-tool-prefix is given or the architecture
                configures it as such (i.e. hosted archs). Otherwise, it points to the orig_target_* tools.

aros_target_* - AROS target tools (in bin/{host}/tools/${target_tool_prefix}-*)

If we are *not* building crosstools (--disable-crosstools, or --with-crosstools=...):

aros_kernel_* - External toolchain (required, and configure should be checking for it!)

orig_target_* - Points to aros_kernel_*

aros_target_* - AROS target tools (in bin/{host}/tools/${target_tool_prefix}-*)

I modified collect-aros to mark ABIv1 ELF files with an EI_OSABI of 15 (AROS) instead of 0 (generic Unix). For now, I'm going to hold off on the change to refuse to load ABIv0 files (with EI_OSABI of 0) until I can get some more testing done (since dos/internalloadseg_elf.c is reused in a few places).

A separate change to have ABIv0 refuse to load ABIv1 applications will need to be made. The patch to have ABIv1 refuse to load ABIv0 applications will come in the near future.

Custom tools

[edit | edit source]
"../$srcdir/configure" --target=linux-i386 --enable-debug=all --with-portssources="$curdir/$portsdir"

Always use the 'tools/crosstools' compiler to build contrib/gnu/gcc. AFAIK this was the previous solution, using a TARGET_CC override in the mmakefile...

The host toolchain should only be used for compiling the tools (genmodule, elf2hunk, etc.) and for the bootstrap (i.e. AROSBootstrap on linux-hosted and grub2 for pc-*). To be exact, the 'kernel' compiler is used for compiling GRUB2, and probably AROSBootstrap too. This is important when cross-compiling.

How about we invert the sense of --enable-crosstools? We make it '--disable-crosstools', with crosstools=yes on by default. That way we can support new arch bring-up (if we don't have working crosstools yet), but 'most people' won't have to deal with the issues of, say, having C compiled with (host) gcc 4.6.1 but C++ compiled with (crosstools) gcc 4.2.

add-symbol-file boot/aros-bsp-linux 0xf7b14000
add-symbol-file boot/aros-base 0xf7b6a910

There's "loadkick" gdb command now which does this auomatically. Btw, don't use add-symbol-file. Use "loadseg <address>".

You need to re-run configure, as you have a stale config:

$ ./config.status --recheck && ./config.status

In the end I would like to get rid of the mmakefile parsing by mmake. What I would like to put in place is that mmake calls the command 'make -f mmakefile __MM__' and parses the output of that command. The mmakefile would then be full of statements like:

__MM__ ::
    echo metatarget : prerequisite1 prerequisite2

This could be generated by genmf macros or gmake functions.

I think this approach would give some advantages:

  • The parsing code in mmake would become simpler:
    * No need to discard non-#MM lines, or at least this would be reduced significantly
    * No need for line continuation handling
    * No need for variable substitution
  • Rule generation in the mmakefile would become more flexible. To generate
    the output one could use all facilities provided by gmake: if
    statements, functions, complex variable substitutions.
    For example: providing arch-specific or configuration-dependent rules
    would become much easier.
  • This architecture would be much easier to extend to other make(-like)
    tools like cmake, scons, ... This would, for example, allow us to
    gradually convert our genmf+gmake build system to a scons-based one.
    External code could choose its preferred method: the AROS SDK would
    support several systems.

I would like to express the following 'build all libraries I depend on' concept in MetaMake:

MODULE=testmod
USELIBS=dos graphics utility

$(MODULE)-linklib: core-linklibs $(addsuffix -includes,$(USELIBS)) \
$(addsuffix -linklib,$(USELIBS))

At the moment it is not possible, as mmake is a static scanner and does not support loops or functions like $(addsuffix ...). Look in the AROS dev mailing list for a thread called 'mmake RFC' (from Aug 2010) describing my idea. If you look at the svn log of tools/MetaMake, there is r34165: 'Started to write a function which calls the __MM__ target in a mmakefile. ...'

I can see this breaking, because it won't know which "parent" metatarget(s) to invoke to build the prerequisites based on the object files / binaries alone, unless you add a dependency on the (relevant) metatarget for every binary produced, i.e. it would be like doing "make <prerequisites-metatarget>-quick" for the prerequisite. Yes, each module target would get an extra linklib-modulename target (not linklib-kernel-dos, just linklib-dos, for example).

mmake at the moment only knows about metatargets and metadependencies. It does not handle real files or know when something is old or new. Therefore it always has to try all metadependencies, and make will find out if a target is up to date or needs to be rebuilt. This could be changed to also let mmake have dependencies on real files (e.g. the .c files for a shared library), remember when something was last built and check if files have changed, but that won't be a small change. Is there some way we can pass info about the file types in the "files=" parameter, so that the macros can automatically pass the files to the necessary utility macros?

CFILES := example1
CPPFILES := example2
ASMFILES := example3

%build_prog mmake=foo-bar \
    progname=Example \
    files="c'$(CFILES)',cpp'$(CPPFILES)',asm'$(ASMFILES)'" \
    targetdir=$(AROS_TESTS) \
    uselibs="amiga arosc"

IMO uselibs= should only be needed when non-standard libraries are used. In my ABI V1 tree I even made a patch to remove all standard libs from the uselibs= statement. I do plan to submit this again sometime in the future. And there should not be a need to add these libs to uselibs=: linklibs that are linked by default should be built by the linklibs-core metatarget, and %build_module takes care of the linklibs-core dependency. Currently a lot of linklibs do not depend on this metatarget because many of the standard libs are autoopened by libautoinit.a. TBH, I also find it a bit weird. Standard libraries don't need -lXXX, because they "link" via proto files, right?

They are (currently) only used for the linklibs-<foo> dependency autogeneration. I was under the impression you wanted to move all the per-library autoinit code back to the specific libraries? Yes, to avoid the current mismatch between versions in libautoinit and libxxx.a.

It might seem logical to add another 'cppfiles' parameter to %build_prog and some others. But then we might need to add dfiles, modfiles or pfiles for the D language, Modula-2 and Pascal as well in the future, so your idea of adding it all to the files parameter in one way or another seems more future-proof to me.

Personally, I'd prefer to let make.tmpl figure it all out from the extensions, even though it'd be a large changeset to fix all the FILES= lines.

FILES = foobar.c \
        qux.cpp \
        bar.S \
        xyyzy.mod

%build_prog mmake=foo-bar \
    progname=Example files="$(FILES)" \
    targetdir=$(AROS_TESTS) uselibs="frobozz"

By the way: what are the 'standard libraries'? That is to be discussed. I would include almost all libs in our workbench/libs and rom/ directories, unless there is a good reason not to use one as a standard linklib. Mesa will always require -lGL to be passed, because AROSMesaGetProcAddress is only present in the linklib. Also, nobody will write code with #include <proto/mesa.h>; all code will have #include <GL/gl.h>.

I am working on minimal-version autoopening, to enhance binary compatibility with m68k and PPC AOS flavors. To be clear, I like the feature you are implementing; I just don't like that programmers have to specify a long list of libs in uselibs= all the time.

Does this give the programmer a way to specify that he'll need more than the minimum version for a function? For example, one aspect of a function may have been buggy/unimplemented in the first version. If that aspect is used, a version is needed that supports it properly.

Yes, in the library.conf file, you would use:

foo.conf

...
.version 33
ULONG FooUpdate(struct Foo *foo)
ULONG FooVersion()
# NOTE: The version 33 FooSet() didn't work at all!
#       It was fixed in version 34.
.version 34
ULONG FooSet(struct Foo *foo, ULONG key, ULONG val)
.version 33
ULONG FooGet(struct Foo *foo, ULONG key)
...

Then, if you use FooSet(), you'll get version 34 of the library, but if your code never calls FooSet(), you'll only OpenLibrary() version 33.

OpenLibrary() requiring version 34 in one case and 37 in another, depending on whether I needed that specific NULL-handling aspect of FooSet(): how will this work with otherwise automatic determination of minimum versions?

Uh... You'll have to handle library loading yourself, then:

APTR Foo;

if (IAmOnABrokenA1000()) {
   Foo = OpenLibrary("foo.library",34);
} else if (TheA3000ReallyNeedsVersion37()) {
   Foo = OpenLibrary("foo.library",37);
} else {
   /* Put your hands in the air like you just don't care! */
   Alert(AT_DeadEnd);
}
Syntax of the makefile

[edit | edit source]

Where do I need to make the changes to add 'contrib' to the amiga-m68k build process? You need to study the scripts in /AROS/scripts/nightly/pkg and get some knowledge from them. Neil can probably give you a better explanation.
