spack package

Subpackages

Submodules

spack.abi module

class spack.abi.ABI

Bases: object

This class provides methods to test ABI compatibility between specs. The current implementation is rather rough and could be improved.

architecture_compatible(parent, child)

Return true if parent and child have ABI compatible targets.

compatible(parent, child, **kwargs)

Returns true iff a parent and child spec are ABI compatible

compiler_compatible(parent, child, **kwargs)

Return true if compilers for parent and child are ABI compatible.
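
A minimal usage sketch (the package names are illustrative, and the specs are assumed to concretize successfully on the local platform):

import spack.abi
import spack.spec

parent = spack.spec.Spec('mpich')   # illustrative package names
child = spack.spec.Spec('zlib')
parent.concretize()
child.concretize()

abi = spack.abi.ABI()
if abi.compatible(parent, child):
    print('parent and child are ABI compatible')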

spack.architecture module

This module contains all the elements that are required to create an architecture object. These include the target processor, the operating system, and the architecture platform (e.g. cray, darwin, linux, bgq, etc.) classes.

On a multiple architecture machine, the architecture spec field can be set to build a package against any target and operating system that is present on the platform. On Cray platforms or any other architecture that has different front and back end environments, the operating system will determine the method of compiler detection.

There are two different types of compiler detection:
  1. Through the $PATH env variable (front-end detection)
  2. Through the tcl module system. (back-end detection)

Depending on which operating system is specified, the compiler will be detected using one of those methods.

For platforms such as linux and darwin, the operating system is autodetected and the target is set to be x86_64.

The command line syntax for specifying an architecture is as follows:

target=<Target name> os=<OperatingSystem name>

If the user wishes to use the defaults, either target or os can be left out of the command line and Spack will concretize using the default. These defaults are set in the ‘platforms/’ directory, which contains the different platform subclasses. If the machine has multiple architectures, the user can also enter frontend (or fe) or backend (or be). These settings will concretize to their respective front-end and back-end targets and operating systems. Additional platforms can be added by creating a subclass of Platform and adding it inside the platform directory.

Platform is an abstract class that is extended by subclasses. If the user wants to add a new type of platform (such as cray_xe), they can create a subclass and set all the class attributes such as priority, front_end, back_end, front_os, and back_os. Platforms also contain a priority class attribute; a lower number signifies higher priority. These numbers are set somewhat arbitrarily and can be changed, though there is rarely a need to unless a new platform is added and the user wants it to be detected first.

Targets are created inside the platform subclasses. Most architectures (like linux and darwin) will have only one target (x86_64), but in the case of Cray machines, there are both front-end and back-end processors. The user can specify which targets are present on the front-end and back-end architectures.

Depending on the platform, operating systems are either auto-detected or are set. The user can set the front-end and back-end operating systems with the class attributes front_os and back_os. The operating system, as described earlier, is responsible for compiler detection.
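
A minimal sketch of how a new platform subclass might look (the class name, target names, operating system names, and module name below are illustrative, not taken from Spack's platforms/ directory):

from spack.architecture import Platform, Target

class HypotheticalCray(Platform):
    priority   = 20            # lower number = higher detection priority
    front_end  = 'sandybridge'
    back_end   = 'haswell'
    default    = 'haswell'
    front_os   = 'SuSE11'
    back_os    = 'CNL'
    default_os = 'CNL'

    def __init__(self):
        super(HypotheticalCray, self).__init__('hypothetical_cray')
        self.add_target(self.front_end, Target(self.front_end))
        self.add_target(self.back_end,
                        Target(self.back_end, 'craype-haswell'))
        # Operating systems would be registered similarly with
        # self.add_operating_system(name, os_object).

    @classmethod
    def detect(cls):
        # Return True only when running on this platform.
        return False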

class spack.architecture.Arch(plat=None, os=None, target=None)

Bases: object

Architecture is now a class to help with setting attributes.

TODO: refactor so that we don’t need this class.

concrete
static from_dict(d)
to_dict()
exception spack.architecture.NoPlatformError

Bases: spack.error.SpackError

class spack.architecture.OperatingSystem(name, version)

Bases: object

OperatingSystem is a class, similar to Platform, that is extended by subclasses for the specifics. It contains the compiler-finding logic: instead of calling two separate methods to find compilers, we call the find_compilers method for each operating system.

find_compiler(cmp_cls, *path)

Try to find the given type of compiler in the user’s environment. For each set of compilers found, this returns compiler objects with the cc, cxx, f77, fc paths and the version filled in.

This will search for compilers with the names in cc_names, cxx_names, etc. and it will group them if they have common prefixes, suffixes, and versions. e.g., gcc-mp-4.7 would be grouped with g++-mp-4.7 and gfortran-mp-4.7.

find_compilers(*paths)

Return a list of compilers found in the supplied paths. This invokes the find() method for each Compiler class, and appends the compilers detected to a list.
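
For example, a front-end search over the directories in $PATH might look like this (a sketch; it assumes a ‘linux’ platform is registered and that compilers are on the search path):

import os
import spack.architecture

platform = spack.architecture.get_platform('linux')
operating_sys = platform.operating_system('default_os')

# Search every directory on PATH for known compiler names.
compilers = operating_sys.find_compilers(*os.environ['PATH'].split(os.pathsep))
for compiler in compilers:
    print('{0} {1} {2} {3}'.format(compiler.cc, compiler.cxx,
                                   compiler.f77, compiler.fc))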

to_dict()
class spack.architecture.Platform(name)

Bases: object

Abstract class that each type of platform will subclass. An instance of the matching subclass is returned once the platform is detected.

add_operating_system(name, os_class)

Add the operating_system class object into the platform.operating_sys dictionary

add_target(name, target)

Used by the platform specific subclass to list available targets. Raises an error if the platform specifies a name that is reserved by spack as an alias.

back_end = None
back_os = None
default = None
default_os = None
classmethod detect()

Subclass is responsible for implementing this method. Returns True if the Platform class detects that it is the current platform and False if it’s not.

front_end = None
front_os = None
operating_system(name)
priority = None
reserved_oss = ['default_os', 'frontend', 'fe', 'backend', 'be']
reserved_targets = ['default_target', 'frontend', 'fe', 'backend', 'be']
classmethod setup_platform_environment(pkg, env)

Subclass can override this method if it requires any platform-specific build environment modifications.

target(name)

This is a getter method for the target dictionary that handles defaulting based on the values provided by default, front-end, and back-end. This can be overwritten by a subclass for which we want to provide further aliasing options.

class spack.architecture.Target(name, module_name=None)

Bases: object

Target is the processor of the host machine. The host machine may have different front-end and back-end targets, especially if it is a Cray machine. The target will have a name and also the module_name (e.g. craype-compiler). Targets will also recognize which platform they came from using the set_platform method. Targets will have compiler-finding strategies.

spack.architecture.arch_for_spec(arch_spec)

Transforms the given architecture spec into an architecture object.

spack.architecture.get_platform(platform_name)

Returns a platform object that corresponds to the given name.

spack.architecture.verify_platform(platform_name)

Determines whether or not the platform with the given name is supported in Spack. For more information, see the ‘spack.platforms’ submodule.

spack.binary_distribution module

exception spack.binary_distribution.NoChecksumException

Bases: exceptions.Exception

exception spack.binary_distribution.NoGpgException

Bases: exceptions.Exception

exception spack.binary_distribution.NoKeyException

Bases: exceptions.Exception

exception spack.binary_distribution.NoOverwriteException

Bases: exceptions.Exception

exception spack.binary_distribution.NoVerifyException

Bases: exceptions.Exception

exception spack.binary_distribution.PickKeyException

Bases: exceptions.Exception

spack.binary_distribution.build_tarball(spec, outdir, force=False, rel=False, yes_to_all=False, key=None)

Build a tarball from given spec and put it into the directory structure used at the mirror (following <tarball_directory_name>).

spack.binary_distribution.buildinfo_file_name(prefix)

Filename of the binary package meta-data file

spack.binary_distribution.checksum_tarball(file)
spack.binary_distribution.download_tarball(spec)

Download binary tarball for given package into stage area. Return True if successful.

spack.binary_distribution.extract_tarball(spec, filename, yes_to_all=False, force=False)

extract binary tarball for given package into install area

spack.binary_distribution.generate_index(outdir, indexfile_path)
spack.binary_distribution.get_keys(install=False, yes_to_all=False, force=False)

Get pgp public keys available on mirror

spack.binary_distribution.get_specs(force=False)

Get spec.yaml’s for build caches available on mirror

spack.binary_distribution.has_gnupg2()
spack.binary_distribution.make_package_relative(workdir, prefix)

Change paths in binaries to relative paths

spack.binary_distribution.read_buildinfo_file(prefix)

Read buildinfo file

spack.binary_distribution.relocate_package(prefix)

Relocate the given package

spack.binary_distribution.sign_tarball(yes_to_all, key, force, specfile_path)
spack.binary_distribution.tarball_directory_name(spec)

Return name of the tarball directory according to the convention <os>-<architecture>/<compiler>/<package>-<version>/

spack.binary_distribution.tarball_name(spec, ext)

Return the name of the tarfile according to the convention <os>-<architecture>-<package>-<dag_hash><ext>

spack.binary_distribution.tarball_path_name(spec, ext)

Return the full path+name for a given spec according to the convention <tarball_directory_name>/<tarball_name>
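
Following the conventions above, a concretized spec might map to a path like the one shown in the comment below (a sketch; the package, compiler, architecture, and ‘.spack’ extension are placeholders, and <dag_hash> stays abstract):

import spack.binary_distribution as bindist
import spack.spec

spec = spack.spec.Spec('zlib')    # illustrative package
spec.concretize()

# e.g. 'linux-x86_64/gcc-7.2.0/zlib-1.2.11/linux-x86_64-zlib-<dag_hash>.spack'
path = bindist.tarball_path_name(spec, '.spack')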

spack.binary_distribution.write_buildinfo_file(prefix, rel=False)

Create a cache file containing information required for the relocation

spack.build_environment module

This module contains all routines related to setting up the package build environment. All of this is set up by package.py just before install() is called.

There are two parts to the build environment:

  1. Python build environment (i.e. install() method)

    This is how things are set up when install() is called. Spack takes advantage of each package being in its own module by adding a bunch of command-like functions (like configure(), make(), etc.) in the package’s module scope. This allows package writers to call them all directly in Package.install() without writing ‘self.’ everywhere. No, this isn’t Pythonic. Yes, it makes the code more readable and more like the shell script from which someone is likely porting. A minimal sketch of such an install() appears at the end of this overview.

  2. Build execution environment

    This is the set of environment variables, like PATH, CC, CXX, etc. that control the build. There are also a number of environment variables used to pass information (like RPATHs and other information about dependencies) to Spack’s compiler wrappers. All of these env vars are also set up here.

Skimming this module is a nice way to get acquainted with the types of calls you can make from within the install() function.
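
A minimal sketch of an install() method using these injected, command-like functions (the package name and configure options are illustrative):

from spack import *

class ExampleConfigureMake(Package):
    """Illustrative package showing the injected build commands."""

    def install(self, spec, prefix):
        # configure, make, etc. are placed in the module scope by
        # spack.build_environment, so no 'self.' is needed.
        configure('--prefix={0}'.format(prefix))
        make()
        make('install')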

exception spack.build_environment.ChildError(msg, module, classname, traceback_string, build_log, context)

Bases: spack.build_environment.InstallError

Special exception class for wrapping exceptions from child processes in Spack’s build environment.

The main features of a ChildError are:

  1. They’re serializable, so when a child build fails, we can send one of these to the parent and let the parent report what happened.
  2. They have a traceback field containing a traceback generated on the child immediately after failure. Spack will print this on failure in lieu of trying to run sys.excepthook on the parent process, so users will see the correct stack trace from a child.
  3. They also contain context, which shows context in the Package implementation where the error happened. This helps people debug Python code in their packages. To get it, Spack searches the stack trace for the deepest frame where self is in scope and is an instance of PackageBase. This will generally find a useful spot in the package.py file.

The long_message of a ChildError displays one of two things:

  1. If the original error was a ProcessError, indicating a command died during the build, we’ll show context from the build log.
  2. If the original error was any other type of error, we’ll show context from the Python code.

SpackError handles displaying the special traceback if we’re in debug mode with spack -d.

build_errors = [('spack.util.executable', 'ProcessError')]
long_message
exception spack.build_environment.InstallError(message, long_message=None)

Bases: spack.error.SpackError

Raised by packages when a package fails to install.

Any subclass of InstallError will be annotated by Spack with a pkg attribute on failure, which the caller can use to get the package for which the exception was raised.

class spack.build_environment.MakeExecutable(name, jobs)

Bases: spack.util.executable.Executable

Special callable executable object for make so the user can specify parallel or not on a per-invocation basis. Using ‘parallel’ as a kwarg will override whatever the package’s global setting is, so you can either default to true or false and override particular calls.

Note that if the SPACK_NO_PARALLEL_MAKE env var is set it overrides everything.
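
For example, inside an install() method (a sketch; the targets are illustrative):

make('-C', 'src')                # honors the package's parallel setting
make('install', parallel=False)  # force this invocation to run serially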

spack.build_environment.fork(pkg, function, dirty, fake)

Fork a child process to do part of a spack build.

Parameters:
  • pkg (PackageBase) – package whose environment we should set up the forked process for.
  • function (callable) – argless function to run in the child process.
  • dirty (bool) – If True, do NOT clean the environment before building.
  • fake (bool) – If True, skip package setup b/c it’s not a real build

Usage:

def child_fun():
    # do stuff
    pass

spack.build_environment.fork(pkg, child_fun, dirty=False, fake=False)

Forked processes are run with the build environment set up by spack.build_environment. This allows package authors to have full control over the environment, etc. without affecting other builds that might be executed in the same spack call.

If something goes wrong, the child process catches the error and passes it to the parent wrapped in a ChildError. The parent is expected to handle (or re-raise) the ChildError.

spack.build_environment.get_package_context(traceback, context=3)

Return some context for an error message when the build fails.

Parameters:
  • traceback (traceback) – A traceback from some exception raised during install
  • context (int) – Lines of context to show before and after the line where the error happened

This function inspects the stack to find where we failed in the package file, and it adds detailed context to the long_message from there.

spack.build_environment.get_rpath_deps(pkg)

Return immediate or transitive RPATHs depending on the package.

spack.build_environment.get_rpaths(pkg)

Get a list of all the rpaths for a package.

spack.build_environment.get_std_cmake_args(pkg)

List of standard arguments used if a package is a CMakePackage.

Parameters:pkg (PackageBase) – package under consideration
Returns:standard arguments that would be used if this package were a CMakePackage instance
Return type:list of str
spack.build_environment.load_external_modules(pkg)

Traverse a package’s spec DAG and load any external modules.

Traverse a package’s dependencies and load any external modules associated with them.

Parameters:pkg (PackageBase) – package to load deps for
spack.build_environment.parent_class_modules(cls)

Get the list of superclass modules that all descend from spack.Package

spack.build_environment.set_build_environment_variables(pkg, env, dirty)

Ensure a clean install environment when we build packages.

This involves unsetting pesky environment variables that may affect the build. It also involves setting environment variables used by Spack’s compiler wrappers.

Parameters:
  • pkg – The package we are building
  • env – The build environment
  • dirty (bool) – Skip unsetting the user’s environment settings
spack.build_environment.set_compiler_environment_variables(pkg, env)
spack.build_environment.set_module_variables_for_package(pkg, module)

Populate the module scope of install() with some useful functions. This makes things easier for package writers.

spack.build_environment.setup_package(pkg, dirty)

Execute all environment setup routines.

spack.compiler module

class spack.compiler.Compiler(cspec, operating_system, target, paths, modules=[], alias=None, environment=None, extra_rpaths=None, **kwargs)

Bases: object

This class encapsulates a Spack “compiler”, which includes C, C++, and Fortran compilers. Subclasses should implement support for specific compilers, their possible names, arguments, and how to identify the particular type of compiler.

PrgEnv = None
PrgEnv_compiler = None
cc_names = []
cc_rpath_arg
classmethod cc_version(cc)
cxx11_flag
cxx14_flag
cxx17_flag
cxx_names = []
cxx_rpath_arg
classmethod cxx_version(cxx)
classmethod default_version(cc)

Override just this to override all compiler version functions.

f77_names = []
f77_rpath_arg
classmethod f77_version(f77)
fc_names = []
fc_rpath_arg
classmethod fc_version(fc)
openmp_flag
prefixes = []
setup_custom_environment(pkg, env)

Set any environment variables necessary to use the compiler.

suffixes = ['-.*']
version
spack.compiler.get_compiler_version(compiler_path, version_arg, regex='(.*)')

spack.concretize module

Functions here are used to take abstract specs and make them concrete. For example, if a spec asks for a version between 1.8 and 1.9, these functions might take the most recent 1.9 version of the package available. Or, if the user didn’t specify a compiler for a spec, then this will assign a compiler to the spec based on defaults or user preferences.

TODO: make this customizable and allow users to configure concretization policies.

class spack.concretize.DefaultConcretizer

Bases: object

This class doesn’t have any state, it just provides some methods for concretization. You can subclass it to override just some of the default concretization strategies, or you can override all of them.

choose_virtual_or_external(spec)

Given a list of candidate virtual and external packages, try to find one that is most ABI compatible.

concretize_architecture(spec)

If the spec is empty provide the defaults of the platform. If the architecture is not a string type, then check if either the platform, target or operating system are concretized. If any of the fields are changed then return True. If everything is concretized (i.e. the architecture attribute is a namedtuple of classes) then return False. If the target is a string type, then convert the string into a concretized architecture. If it has no architecture and the root of the DAG has an architecture, then use the root; otherwise use the defaults on the platform.

concretize_compiler(spec)

If the spec already has a compiler, we’re done. If not, then take the compiler used for the nearest ancestor with a compiler spec and use that. If the ancestor’s compiler is not concrete, then use the preferred compiler as specified in spackconfig.

Intuition: Use the spackconfig default if no package that depends on this one has a strict compiler requirement. Otherwise, try to build with the compiler that will be used by libraries that link to this one, to maximize compatibility.

concretize_compiler_flags(spec)

The compiler flags are updated to match those of the spec whose compiler is used, defaulting to no compiler flags in the spec. Default specs set at the compiler level will still be added later.

concretize_variants(spec)

If the spec already has variants filled in, return. Otherwise, add the user preferences from packages.yaml or the default variants from the package specification.

concretize_version(spec)

If the spec is already concrete, return. Otherwise take the preferred version from spackconfig, and default to the package’s version if there are no available versions.

TODO: In many cases we probably want to look for installed versions of each package and use an installed version if we can link to it. The policy implemented here will tend to rebuild a lot of stuff because it will prefer the compiler in the spec to whatever compiler already-installed things were built with. There is likely some better policy that finds some middle ground between these two extremes.

exception spack.concretize.InsufficientArchitectureInfoError(spec, archs)

Bases: spack.error.SpackError

Raised when details on architecture cannot be collected from the system

exception spack.concretize.NoBuildError(spec)

Bases: spack.error.SpackError

Raised when a package is configured with the buildable option False, but no satisfactory external versions can be found

exception spack.concretize.NoCompilersForArchError(arch, available_os_targets)

Bases: spack.error.SpackError

exception spack.concretize.NoValidVersionError(spec)

Bases: spack.error.SpackError

Raised when there is no way to have a concrete version for a particular spec.

exception spack.concretize.UnavailableCompilerVersionError(compiler_spec, arch=None)

Bases: spack.error.SpackError

Raised when there is no available compiler that satisfies a compiler spec.

spack.concretize.find_spec(spec, condition, default=None)

Searches the dag from spec in an intelligent order and looks for a spec that matches a condition

spack.config module

This module implements Spack’s configuration file handling.

This implements Spack’s configuration system, which handles merging multiple scopes with different levels of precedence. See the documentation on Configuration Scopes for details on how Spack’s configuration system behaves. The scopes are:

  1. default
  2. system
  3. site
  4. user

And corresponding per-platform scopes. Important functions in this module are:

get_config reads in YAML data for a particular scope and returns it. Callers can then modify the data and write it back with update_config.

When read in, Spack validates configurations with jsonschemas. The schemas are in submodules of spack.schema.

exception spack.config.ConfigError(message, long_message=None)

Bases: spack.error.SpackError

exception spack.config.ConfigFileError(message, long_message=None)

Bases: spack.config.ConfigError

exception spack.config.ConfigFormatError(validation_error, data)

Bases: spack.config.ConfigError

Raised when a configuration format does not match its schema.

exception spack.config.ConfigSanityError(validation_error, data)

Bases: spack.config.ConfigFormatError

Same as ConfigFormatError, raised when config is written by Spack.

class spack.config.ConfigScope(name, path)

Bases: object

This class represents a configuration scope.

A scope is one directory containing named configuration files. Each file is a config “section” (e.g., mirrors, compilers, etc).

clear()

Empty cached config information.

get_section(section)
get_section_filename(section)
write_section(section)
spack.config.clear_config_caches()

Clears the caches for configuration files, which will cause them to be re-read upon the next request

spack.config.extend_with_default(validator_class)

Add support for the ‘default’ attr for properties and patternProperties.

jsonschema does not handle this out of the box – it only validates. This allows us to set default values for configs where certain fields are None b/c they’re deleted or commented out.

spack.config.get_config(section, scope=None)

Get configuration settings for a section.

If scope is None or not provided, return the merged contents of all of Spack’s configuration scopes. If scope is provided, return only the configuration as specified in that scope.

This strips off the top-level name from the YAML section. That is, for a YAML config file that looks like this:

config:
  install_tree: $spack/opt/spack
  module_roots:
    lmod:   $spack/share/spack/lmod

get_config('config') will return:

{ 'install_tree': '$spack/opt/spack',
  'module_roots': {
      'lmod': '$spack/share/spack/lmod'
  }
}
spack.config.get_config_filename(scope, section)

For some scope and section, get the name of the configuration file

spack.config.get_path(path, data)
spack.config.highest_precedence_scope()

Get the scope with highest precedence (prefs will override others).

spack.config.override(string)

Test if a spack YAML string is an override.

See spack_yaml for details. Keys in Spack YAML can end in ::, and if they do, their values completely replace lower-precedence configs instead of merging into them.

spack.config.print_section(section)

Print a configuration to stdout.

spack.config.section_schemas = {'mirrors': {'additionalProperties': False, 'patternProperties': {'mirrors': {'default': {}, 'additionalProperties': False, 'patternProperties': {'\\w[\\w-]*': {'type': 'string'}}, 'type': 'object'}}, '$schema': 'http://json-schema.org/schema#', 'type': 'object', 'title': 'Spack mirror configuration file schema'}, 'repos': {'additionalProperties': False, 'patternProperties': {'repos': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}}, '$schema': 'http://json-schema.org/schema#', 'type': 'object', 'title': 'Spack repository configuration file schema'}, 'modules': {'title': 'Spack module file configuration file schema', 'patternProperties': {'modules': {'default': {}, 'additionalProperties': False, 'type': 'object', 'properties': {'tcl': {'allOf': [{'$ref': '#/definitions/module_type_configuration'}, {}]}, 'enable': {'default': [], 'items': {'enum': ['tcl', 'dotkit', 'lmod'], 'type': 'string'}, 'type': 'array'}, 'lmod': {'allOf': [{'$ref': '#/definitions/module_type_configuration'}, {'core_compilers': {'$ref': '#/definitions/array_of_strings'}, 'hierarchical_scheme': {'$ref': '#/definitions/array_of_strings'}}]}, 'dotkit': {'allOf': [{'$ref': '#/definitions/module_type_configuration'}, {}]}, 'prefix_inspections': {'patternProperties': {'\\w[\\w-]*': {'$ref': '#/definitions/array_of_strings'}}, 'type': 'object'}}}}, 'additionalProperties': False, 'definitions': {'dependency_selection': {'enum': ['none', 'direct', 'all'], 'type': 'string'}, 'array_of_strings': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'module_file_configuration': {'default': {}, 'additionalProperties': False, 'type': 'object', 'properties': {'filter': {'default': {}, 'additionalProperties': False, 'type': 'object', 'properties': {'environment_blacklist': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}}}, 'load': {'$ref': '#/definitions/array_of_strings'}, 'template': {'type': 'string'}, 'environment': {'default': {}, 'additionalProperties': False, 'type': 'object', 'properties': {'append_path': {'$ref': '#/definitions/dictionary_of_strings'}, 'set': {'$ref': '#/definitions/dictionary_of_strings'}, 'prepend_path': {'$ref': '#/definitions/dictionary_of_strings'}, 'unset': {'$ref': '#/definitions/array_of_strings'}}}, 'prerequisites': {'$ref': '#/definitions/dependency_selection'}, 'autoload': {'$ref': '#/definitions/dependency_selection'}, 'conflict': {'$ref': '#/definitions/array_of_strings'}, 'suffixes': {'$ref': '#/definitions/dictionary_of_strings'}}}, 'dictionary_of_strings': {'patternProperties': {'\\w[\\w-]*': {'type': 'string'}}, 'type': 'object'}, 'module_type_configuration': {'default': {}, 'anyOf': [{'properties': {'blacklist': {'$ref': '#/definitions/array_of_strings'}, 'whitelist': {'$ref': '#/definitions/array_of_strings'}, 'hash_length': {'default': 7, 'minimum': 0, 'type': 'integer'}, 'verbose': {'default': False, 'type': 'boolean'}, 'naming_scheme': {'type': 'string'}}}, {'patternProperties': {'\\w[\\w-]*': {'$ref': '#/definitions/module_file_configuration'}}}], 'type': 'object'}}, '$schema': 'http://json-schema.org/schema#', 'type': 'object'}, 'packages': {'additionalProperties': False, 'patternProperties': {'packages': {'default': {}, 'additionalProperties': False, 'patternProperties': {'\\w[\\w-]*': {'default': {}, 'additionalProperties': False, 'type': 'object', 'properties': {'paths': {'default': {}, 'type': 'object'}, 'providers': {'default': {}, 'additionalProperties': False, 'patternProperties': {'\\w[\\w-]*': {'default': [], 
'items': {'type': 'string'}, 'type': 'array'}}, 'type': 'object'}, 'modules': {'default': {}, 'type': 'object'}, 'buildable': {'default': True, 'type': 'boolean'}, 'version': {'default': [], 'items': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}, 'type': 'array'}, 'variants': {'oneOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'compiler': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}}}}, 'type': 'object'}}, '$schema': 'http://json-schema.org/schema#', 'type': 'object', 'title': 'Spack package configuration file schema'}, 'config': {'additionalProperties': False, 'patternProperties': {'config': {'default': {}, 'type': 'object', 'properties': {'install_tree': {'type': 'string'}, 'install_hash_length': {'minimum': 1, 'type': 'integer'}, 'install_path_scheme': {'type': 'string'}, 'verify_ssl': {'type': 'boolean'}, 'source_cache': {'type': 'string'}, 'build_stage': {'oneOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'build_jobs': {'minimum': 1, 'type': 'integer'}, 'template_dirs': {'items': {'type': 'string'}, 'type': 'array'}, 'checksum': {'type': 'boolean'}, 'misc_cache': {'type': 'string'}, 'dirty': {'type': 'boolean'}, 'module_roots': {'additionalProperties': False, 'type': 'object', 'properties': {'tcl': {'type': 'string'}, 'lmod': {'type': 'string'}, 'dotkit': {'type': 'string'}}}}}}, '$schema': 'http://json-schema.org/schema#', 'type': 'object', 'title': 'Spack module file configuration file schema'}, 'compilers': {'additionalProperties': False, 'patternProperties': {'compilers': {'items': {'compiler': {'additionalProperties': False, 'required': ['paths', 'spec', 'modules', 'operating_system'], 'type': 'object', 'properties': {'environment': {'default': {}, 'additionalProperties': False, 'type': 'object', 'properties': {'set': {'patternProperties': {'\\w[\\w-]*': {'type': 'string'}}, 'type': 'object'}}}, 'paths': {'additionalProperties': False, 'required': ['cc', 'cxx', 'f77', 'fc'], 'type': 'object', 'properties': {'cc': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'cxx': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'f77': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'fc': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}}}, 'operating_system': {'type': 'string'}, 'flags': {'additionalProperties': False, 'type': 'object', 'properties': {'cppflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'cflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'fflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'cxxflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'ldflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'ldlibs': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}}}, 'alias': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'extra_rpaths': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'modules': {'anyOf': [{'type': 'string'}, {'type': 'null'}, {'type': 'array'}]}, 'spec': {'type': 'string'}}}}, 'type': 'array'}}, '$schema': 'http://json-schema.org/schema#', 'type': 'object', 'title': 'Spack compiler configuration file schema'}}

Dict mapping each config section name to the jsonschema used to validate that section.

spack.config.update_config(section, update_data, scope=None)

Update the configuration file for a particular scope.

Overwrites contents of a section in a scope with update_data, then writes out the config file.

update_data should have the top-level section name stripped off (it will be re-added). Data itself can be a list, dict, or any other yaml-ish structure.
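
A minimal sketch of the read-modify-write cycle (the key and value are illustrative):

import spack.config

cfg = spack.config.get_config('config')   # merged data, section name stripped
cfg['build_jobs'] = 4                      # illustrative change
spack.config.update_config('config', cfg, scope='user')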

spack.config.validate_scope(scope)

Ensure that scope is valid, and return a valid scope if it is None.

This should be used by routines in config.py to validate scope name arguments, and to determine a default scope where no scope is specified.

spack.config.validate_section(data, schema)

Validate data read in from a Spack YAML file.

This leverages the line information (start_mark, end_mark) stored on Spack YAML structures.

spack.config.validate_section_name(section)

Exit if the section is not a valid section.

spack.database module

Spack’s installation tracking database.

The database serves two purposes:

  1. It implements a cache on top of a potentially very large Spack directory hierarchy, speeding up many operations that would otherwise require filesystem access.
  2. It will allow us to track external installations as well as lost packages and their dependencies.

Prior to the implementation of this store, a directory layout served as the authoritative database of packages in Spack. This module provides a cache and a sanity checking mechanism for what is in the filesystem.

exception spack.database.CorruptDatabaseError(message, long_message=None)

Bases: spack.error.SpackError

Raised when errors are found while reading the database.

class spack.database.Database(root, db_dir=None)

Bases: object

Spack’s installation tracking database (see the module description above). Maintains per-process lock objects for each install prefix.

activated_extensions_for(spec_like, *args, **kwargs)
add(spec_like, *args, **kwargs)
get_record(spec_like, *args, **kwargs)
installed_extensions_for(spec_like, *args, **kwargs)
installed_relatives(spec_like, *args, **kwargs)
missing(spec)
prefix_lock(spec)

Get a lock on a particular spec’s installation directory.

NOTE: The installation directory does not need to exist.

Prefix lock is a byte range lock on the nth byte of a file.

The lock file is spack.store.db.prefix_lock – the DB tells us what to call it and it lives alongside the install DB.

n is the sys.maxsize-bit prefix of the DAG hash. This makes the likelihood of collision very low AND gives us readers-writer lock semantics with just a single lockfile, so no cleanup is required.

prefix_read_lock(*args, **kwds)
prefix_write_lock(*args, **kwds)
query(query_spec=<built-in function any>, known=<built-in function any>, installed=True, explicit=<built-in function any>)

Run a query on the database.

query_spec
Queries iterate through specs in the database and return those that satisfy the supplied query_spec. If query_spec is any, this will match all specs in the database. If it is a spec, we’ll evaluate spec.satisfies(query_spec).

The query can be constrained by two additional attributes:

known

Possible values: True, False, any

Specs that are “known” are those for which Spack can locate a package.py file – i.e., Spack “knows” how to install them. Specs that are unknown may represent packages that existed in a previous version of Spack, but have since either changed their name or been removed.

installed

Possible values: True, False, any

Specs for which a prefix exists are “installed”. A spec that is NOT installed will be in the database if some other spec depends on it but its installation has gone away since Spack installed it.

TODO: Specs are a lot like queries. Should there be a wildcard spec object, and should specs have attributes like installed and known that can be queried? Or are these really special cases that only belong here?
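
A minimal query sketch (the package name is illustrative; spack.store.db is the shared database instance mentioned above):

import spack.spec
import spack.store

query = spack.spec.Spec('zlib')

installed = spack.store.db.query(query)                   # installed specs only
everything = spack.store.db.query(query, installed=any)   # include missing ones
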
query_one(query_spec, known=<built-in function any>, installed=True)

Query for exactly one spec that matches the query spec.

Raises an assertion error if more than one spec matches the query. Returns None if no installed package matches.

read_transaction(timeout=60)

Get a read lock context manager for use in a with block.

reindex(directory_layout)

Build database index from scratch based on a directory layout.

Locks the DB if it isn’t locked already.

remove(spec_like, *args, **kwargs)
write_transaction(timeout=60)

Get a write lock context manager for use in a with block.

class spack.database.InstallRecord(spec, path, installed, ref_count=0, explicit=False)

Bases: object

A record represents one installation in the DB.

The record keeps track of the spec for the installation, its install path, AND whether or not it is installed. We need the installed flag in case a user either:

  1. blew away a directory, or
  2. used spack uninstall -f to get rid of it

If, in either case, the package was removed but others still depend on it, we still need to track its spec, so we don’t actually remove from the database until a spec has no installed dependents left.

classmethod from_dict(spec, dictionary)
to_dict()
exception spack.database.InvalidDatabaseVersionError(expected, found)

Bases: spack.error.SpackError

exception spack.database.NonConcreteSpecAddError(message, long_message=None)

Bases: spack.error.SpackError

Raised when attempting to add a non-concrete spec to the DB.

spack.dependency module

Data structures that represent Spack’s dependency relationships.

class spack.dependency.Dependency(pkg, spec, type=('build', 'link'))

Bases: object

Class representing metadata for a dependency on a package.

This class differs from spack.spec.DependencySpec because it represents metadata at the Package level. spack.spec.DependencySpec is a descriptor for an actual package configuration, while Dependency is a descriptor for a package’s dependency requirements.

A dependency is a requirement for a configuration of another package that satisfies a particular spec. The dependency can have types, which determine how that package configuration is required, e.g. whether it is required for building the package, whether it needs to be linked to, or whether it is needed at runtime so that Spack can call commands from it.

A package can also depend on another package with patches. This is for cases where the maintainers of one package also maintain special patches for their dependencies. If one package depends on another with patches, a special version of that dependency with patches applied will be built for use by the dependent package. The patches are included in the new version’s spec hash to differentiate it from unpatched versions of the same package, so that unpatched versions of the dependency package can coexist with the patched version.

merge(other)

Merge constraints, deptypes, and patches of other into self.

name

Get the name of the dependency package.

spack.dependency.all_deptypes = ('build', 'link', 'run', 'test')

The types of dependency relationships that Spack understands.

spack.dependency.canonical_deptype(deptype)

Convert deptype to a canonical sorted tuple, or raise ValueError.

Parameters:deptype (str or list or tuple) – string representing dependency type, or a list/tuple of such strings. Can also be the builtin function all or the string ‘all’, which result in a tuple of all dependency types known to Spack.
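
For example (expected results shown as comments, based on the description above):

from spack.dependency import canonical_deptype

canonical_deptype('build')            # ('build',)
canonical_deptype(('run', 'build'))   # ('build', 'run')
canonical_deptype('all')              # ('build', 'link', 'run', 'test')
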
spack.dependency.default_deptype = ('build', 'link')

Default dependency type if none is specified

spack.directives module

This package contains directives that can be used within a package.

Directives are functions that can be called inside a package definition to modify the package, for example:

class OpenMpi(Package):
    depends_on("hwloc")
    provides("mpi")
    ...

provides and depends_on are spack directives.

The available directives are:

  • version
  • depends_on
  • provides
  • extends
  • patch
  • variant
  • resource
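
A slightly fuller sketch combining several directives (the package name, URL, checksum, variant names, and patch file are placeholders, not a real package):

from spack import *

class ExamplePackage(Package):
    """Illustrative package combining several directives."""

    homepage = "https://example.com/example-package"
    url      = "https://example.com/example-package-1.0.tar.gz"

    version('1.0', 'ffffffffffffffffffffffffffffffff')   # placeholder md5 checksum

    variant('shared', default=True, description='Build shared libraries')
    variant('parallel', default=False, description='Enable MPI support')

    depends_on('cmake', type='build')
    depends_on('mpi', when='+parallel')

    patch('fix-build.patch', when='@1.0')

    def install(self, spec, prefix):
        pass  # build and install steps go here
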
spack.directives.version(*args, **kwargs)

Adds a version and metadata describing how to fetch it. Metadata is just stored as a dict in the package’s versions dictionary. Package must turn it into a valid fetch strategy later.

spack.directives.conflicts(*args, **kwargs)

Allows a package to define a conflict.

Currently, a “conflict” is a concretized configuration that is known to be non-valid. For example, a package that is known not to be buildable with intel compilers can declare:

conflicts('%intel')

To express the same constraint only when the ‘foo’ variant is activated:

conflicts('%intel', when='+foo')
Parameters:
  • conflict_spec (Spec) – constraint defining the known conflict
  • when (Spec) – optional constraint that triggers the conflict
  • msg (str) – optional user defined message
spack.directives.depends_on(*args, **kwargs)

Creates a dict of deps with specs defining when they apply.

Parameters:
  • spec (Spec or str) – the package and constraints depended on
  • when (Spec or str) – when the dependent satisfies this, it has the dependency represented by spec
  • type (str or tuple of str) – str or tuple of legal Spack deptypes
  • patches (obj or list) – single result of patch() directive, a str to be passed to patch, or a list of these

This directive is to be used inside a Package definition to declare that the package requires other packages to be built first. See the section “Dependency specs” in the Spack Packaging Guide.
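
Typical invocations inside a package definition (package names, variant names, and constraints are illustrative):

depends_on('hdf5@1.8:')                    # any hdf5 at or above 1.8
depends_on('cmake@3.1:', type='build')     # needed only at build time
depends_on('python', type=('build', 'run'))
depends_on('mpi', when='+parallel')        # only when the 'parallel' variant is on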

spack.directives.extends(*args, **kwargs)

Same as depends_on, but dependency is symlinked into parent prefix.

This is for Python and other language modules where the module needs to be installed into the prefix of the Python installation. Spack handles this by installing modules into their own prefix, but allowing ONE module version to be symlinked into a parent Python install at a time.

keyword arguments can be passed to extends() so that extension packages can pass parameters to the extendee’s extension mechanism.

spack.directives.provides(*args, **kwargs)

Allows packages to provide a virtual dependency. If a package provides ‘mpi’, other packages can declare that they depend on “mpi”, and spack can use the providing package to satisfy the dependency.

spack.directives.patch(*args, **kwargs)

Packages can declare patches to apply to source. You can optionally provide a when spec to indicate that a particular patch should only be applied when the package’s spec meets certain conditions (e.g. a particular version).

Parameters:
  • url_or_filename (str) – url or filename of the patch
  • level (int) – patch level (as in the patch shell command)
  • when (Spec) – optional anonymous spec that specifies when to apply the patch
  • working_dir (str) – dir to change to before applying
Keyword Arguments:
 
  • sha256 (str) – sha256 sum of the patch, used to verify the patch (only required for URL patches)
  • archive_sha256 (str) – sha256 sum of the archive, if the patch is compressed (only required for compressed URL patches)
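
Typical invocations (the file names, URL, version ranges, and checksum are placeholders):

# Patch shipped alongside the package.py file, applied only to version 2.1.
patch('local-fix.patch', when='@2.1')

# Patch downloaded from a URL; URL patches require a sha256 checksum.
patch('https://example.com/upstream-fix.patch',
      sha256='ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff',
      when='@2.0:2.3')
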
spack.directives.variant(*args, **kwargs)

Define a variant for the package. Packager can specify a default value as well as a text description.

Parameters:
  • name (str) – name of the variant
  • default (str or bool) – default value for the variant, if not specified otherwise the default will be False for a boolean variant and ‘nothing’ for a multi-valued variant
  • description (str) – description of the purpose of the variant
  • values (tuple or callable) – either a tuple of strings containing the allowed values, or a callable accepting one value and returning True if it is valid
  • multi (bool) – if False only one value per spec is allowed for this variant
  • validator (callable) – optional group validator to enforce additional logic. It receives a tuple of values and should raise an instance of SpackError if the group doesn’t meet the additional constraints
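
Typical invocations (variant names and values are illustrative):

# Simple boolean variant.
variant('shared', default=True, description='Build shared libraries')

# Multi-valued variant restricted to a fixed set of values.
variant('fabrics',
        default='verbs',
        values=('verbs', 'psm', 'ucx'),
        multi=True,
        description='Fabrics to enable')
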
spack.directives.resource(*args, **kwargs)

Define an external resource to be fetched and staged when building the package. Based on the keywords present in the dictionary the appropriate FetchStrategy will be used for the resource. Resources are fetched and staged in their own folder inside spack stage area, and then moved into the stage area of the package that needs them.

List of recognized keywords:

  • ‘when’ : (optional) represents the condition upon which the resource is needed
  • ‘destination’ : (optional) path where to move the resource. This path must be relative to the main package stage area.
  • ‘placement’ : (optional) gives the possibility to fine tune how the resource is moved into the main package stage area.

spack.directory_layout module

class spack.directory_layout.DirectoryLayout(root)

Bases: object

A directory layout is used to associate unique paths with specs. Different installations are going to want different layouts for their install, and they can use this to customize the nesting structure of spack installs.

all_specs()

To be implemented by subclasses to traverse all specs for which there is a directory within the root.

check_installed(spec)

Checks whether a spec is installed.

Return the spec’s prefix, if it is installed, None otherwise.

Raise an exception if the install is inconsistent or corrupt.

create_install_directory(spec)

Creates the installation directory for a spec.

hidden_file_paths

Return a list of hidden files used by the directory layout.

Paths are relative to the root of an install directory.

If the directory layout uses no hidden files to maintain state, this should return an empty container, e.g. [] or ().

path_for_spec(spec)

Return absolute path from the root to a directory for the spec.

relative_path_for_spec(spec)

Implemented by subclasses to return a relative path from the install root to a unique location for the provided spec.

remove_install_directory(spec)

Removes a prefix and any empty parent directories from the root. Raises RemoveFailedError if something goes wrong.

exception spack.directory_layout.DirectoryLayoutError(message, long_msg=None)

Bases: spack.error.SpackError

Superclass for directory layout errors.

exception spack.directory_layout.ExtensionAlreadyInstalledError(spec, ext_spec)

Bases: spack.directory_layout.DirectoryLayoutError

Raised when an extension is added to a package that already has it.

exception spack.directory_layout.ExtensionConflictError(spec, ext_spec, conflict)

Bases: spack.directory_layout.DirectoryLayoutError

Raised when an extension is added to a package that already has it.

class spack.directory_layout.ExtensionsLayout(root, **kwargs)

Bases: object

A directory layout is used to associate unique paths with specs for package extensions. Keeps track of which extensions are activated for what package. Depending on the use case, this can mean globally activated extensions directly in the installation folder - or extensions activated in filesystem views.

add_extension(spec, ext_spec)

Add to the list of currently installed extensions.

check_activated(spec, ext_spec)

Ensure that ext_spec can be removed from spec.

If not, raise NoSuchExtensionError.

check_extension_conflict(spec, ext_spec)

Ensure that ext_spec can be activated in spec.

If not, raise ExtensionAlreadyInstalledError or ExtensionConflictError.

extendee_target_directory(extendee)

Specify to which full path extendee should link all files from extensions.

extension_map(spec)

Get a dict of currently installed extension packages for a spec.

Dict maps { name : extension_spec }. Modifying the dict does not affect the internals of this layout.

remove_extension(spec, ext_spec)

Remove from the list of currently installed extensions.

exception spack.directory_layout.InconsistentInstallDirectoryError(message, long_msg=None)

Bases: spack.directory_layout.DirectoryLayoutError

Raised when a package seems to be installed to the wrong place.

exception spack.directory_layout.InstallDirectoryAlreadyExistsError(path)

Bases: spack.directory_layout.DirectoryLayoutError

Raised when create_install_directory is called unnecessarily.

exception spack.directory_layout.InvalidDirectoryLayoutParametersError(message, long_msg=None)

Bases: spack.directory_layout.DirectoryLayoutError

Raised when invalid directory layout parameters are supplied

exception spack.directory_layout.InvalidExtensionSpecError(message, long_msg=None)

Bases: spack.directory_layout.DirectoryLayoutError

Raised when an extension file has a bad spec in it.

exception spack.directory_layout.NoSuchExtensionError(spec, ext_spec)

Bases: spack.directory_layout.DirectoryLayoutError

Raised when an extension isn’t there on deactivate.

exception spack.directory_layout.RemoveFailedError(installed_spec, prefix, error)

Bases: spack.directory_layout.DirectoryLayoutError

Raised when a DirectoryLayout cannot remove an install prefix.

exception spack.directory_layout.SpecHashCollisionError(installed_spec, new_spec)

Bases: spack.directory_layout.DirectoryLayoutError

Raised when there is a hash collision in an install layout.

exception spack.directory_layout.SpecReadError(message, long_msg=None)

Bases: spack.directory_layout.DirectoryLayoutError

Raised when directory layout can’t read a spec.

class spack.directory_layout.YamlDirectoryLayout(root, **kwargs)

Bases: spack.directory_layout.DirectoryLayout

By default lays out installation directories like this:

<install root>/
    <platform-os-target>/
        <compiler>-<compiler version>/
            <name>-<version>-<hash>

The hash here is a SHA-1 hash for the full DAG plus the build spec. TODO: implement the build spec.

The installation directory scheme can be modified with the arguments hash_len and path_scheme.

all_specs()
build_env_path(spec)
build_log_path(spec)
build_packages_path(spec)
check_installed(spec)
create_install_directory(spec)
hidden_file_paths
metadata_path(spec)
read_spec(path)

Read the contents of a file and parse them as a spec

relative_path_for_spec(spec)
spec_file_path(spec)

Gets full path to spec file

specs_by_hash()
write_spec(spec, path)

Write a spec out to a file.

class spack.directory_layout.YamlExtensionsLayout(root, layout)

Bases: spack.directory_layout.ExtensionsLayout

Implements globally activated extensions within a YamlDirectoryLayout.

add_extension(spec, ext_spec)
check_activated(spec, ext_spec)
check_extension_conflict(spec, ext_spec)
extendee_target_directory(extendee)
extension_file_path(spec)

Gets full path to an installed package’s extension file

extension_map(spec)

Defensive copying version of _extension_map() for external API.

remove_extension(spec, ext_spec)
class spack.directory_layout.YamlViewExtensionsLayout(root, layout)

Bases: spack.directory_layout.YamlExtensionsLayout

Governs the directory layout present when creating filesystem views in a certain root folder.

Meant to replace YamlDirectoryLayout when working with filesystem views.

extendee_target_directory(extendee)
extension_file_path(spec)

Gets the full path to an installed package’s extension file.

spack.environment module

class spack.environment.AppendFlagsEnv(name, value, **kwargs)

Bases: spack.environment.NameValueModifier

execute()
class spack.environment.AppendPath(name, value, **kwargs)

Bases: spack.environment.NameValueModifier

execute()
class spack.environment.EnvironmentModifications(other=None)

Bases: object

Keeps track of requests to modify the current environment.

Each call to a method to modify the environment stores the extra information on the caller in the request:

  • ‘filename’ : filename of the module where the caller is defined
  • ‘lineno’: line number where the request occurred
  • ‘context’ : line of code that issued the request that failed
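
A minimal usage sketch (variable names and paths are illustrative):

from spack.environment import EnvironmentModifications

env = EnvironmentModifications()
env.set('CC', '/usr/bin/gcc')
env.prepend_path('PATH', '/opt/tools/bin')
env.append_path('LD_LIBRARY_PATH', '/opt/tools/lib')
env.unset('DEBUG_FLAGS')

# Nothing changes until the stored requests are applied.
env.apply_modifications()
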
append_flags(name, value, sep=' ', **kwargs)

Stores in the current object a request to append to an env variable

Parameters:
  • name – name of the environment variable to be appended to
  • value – value to append to the environment variable

Appends with spaces separating different additions to the variable

append_path(name, path, **kwargs)

Stores a request to append a path to a path list.

Parameters:
  • name – name of the path list in the environment
  • path – path to be appended
apply_modifications()

Applies the modifications and clears the list.

clear()

Clears the current list of modifications

extend(other)
static from_sourcing_file(filename, *args, **kwargs)

Returns modifications that would be made by sourcing a file.

Parameters:
  • filename (str) – The file to source
  • *args (list of str) – Arguments to pass on the command line
Keyword Arguments:
 
  • shell (str) – The shell to use (default: bash)
  • shell_options (str) – Options passed to the shell (default: -c)
  • source_command (str) – The command to run (default: source)
  • suppress_output (str) – Redirect used to suppress output of command (default: &> /dev/null)
  • concatenate_on_success (str) – Operator used to execute a command only when the previous command succeeds (default: &&)
Returns:an object that, if executed, has the same effect on the environment as sourcing the file
Return type:EnvironmentModifications
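
A minimal sketch (the script path is a placeholder):

from spack.environment import EnvironmentModifications

env = EnvironmentModifications.from_sourcing_file('/opt/tools/setup.sh')
env.apply_modifications()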

group_by_name()

Returns a dict of the modifications grouped by variable name.

Returns:dict mapping the environment variable name to the modifications to be done on it
prepend_path(name, path, **kwargs)

Same as append_path, but the path is pre-pended.

Parameters:
  • name – name of the path list in the environment
  • path – path to be pre-pended
remove_path(name, path, **kwargs)

Stores a request to remove a path from a path list.

Parameters:
  • name – name of the path list in the environment
  • path – path to be removed
set(name, value, **kwargs)

Stores a request to set an environment variable.

Parameters:
  • name – name of the environment variable to be set
  • value – value of the environment variable
set_path(name, elements, **kwargs)

Stores a request to set a path generated from a list.

Parameters:
  • name – name of the environment variable to be set.
  • elements – elements of the path to set.
unset(name, **kwargs)

Stores a request to unset an environment variable.

Parameters:name – name of the environment variable to be unset
class spack.environment.NameModifier(name, **kwargs)

Bases: object

update_args(**kwargs)
class spack.environment.NameValueModifier(name, value, **kwargs)

Bases: object

update_args(**kwargs)
class spack.environment.PrependPath(name, value, **kwargs)

Bases: spack.environment.NameValueModifier

execute()
class spack.environment.RemovePath(name, value, **kwargs)

Bases: spack.environment.NameValueModifier

execute()
class spack.environment.SetEnv(name, value, **kwargs)

Bases: spack.environment.NameValueModifier

execute()
class spack.environment.SetPath(name, value, **kwargs)

Bases: spack.environment.NameValueModifier

execute()
class spack.environment.UnsetEnv(name, **kwargs)

Bases: spack.environment.NameModifier

execute()
spack.environment.concatenate_paths(paths, separator=':')

Concatenates an iterable of paths into a string of paths separated by separator, defaulting to colon.

Parameters:
  • paths – iterable of paths
  • separator – the separator to use, default ‘:’
Returns:string

spack.environment.filter_environment_blacklist(env, variables)

Generator that filters out any change to environment variables present in the input list.

Parameters:
  • env – list of environment modifications
  • variables – list of variable names to be filtered
Returns:items in env if they are not in variables

spack.environment.inspect_path(root, inspections, exclude=None)

Inspects root to search for the subdirectories in inspections. Adds every path found to a list of prepend-path commands and returns it.

Parameters:
  • root (str) – absolute path where to search for subdirectories
  • inspections (dict) – maps relative paths to a list of environment variables that will be modified if the path exists. The modifications are not performed immediately, but stored in a command object that is returned to client
  • exclude (callable) – optional callable. If present it must accept an absolute path and return True if it should be excluded from the inspection

Examples:

The following lines execute an inspection in /usr to search for /usr/include and /usr/lib64. If found we want to prepend /usr/include to CPATH and /usr/lib64 to MY_LIB64_PATH.

# Set up the dictionary containing the inspection
inspections = {
    'include': ['CPATH'],
    'lib64': ['MY_LIB64_PATH']
}

# Get back the list of commands needed to modify the environment
env = inspect_path('/usr', inspections)

# Eventually execute the commands
env.apply_modifications()
Returns:instance of EnvironmentModifications containing the requested modifications
spack.environment.set_or_unset_not_first(variable, changes, errstream)

Check if we are going to set or unset something after other modifications have already been requested.

spack.environment.validate(env, errstream)

Validates the environment modifications to check for the presence of suspicious patterns. Prompts a warning for everything that was found.

Current checks: - set or unset variables after other changes on the same variable

Parameters:env – list of environment modifications

spack.error module

exception spack.error.SpackError(message, long_message=None)

Bases: exceptions.Exception

This is the superclass for all Spack errors. Subclasses can be found in the modules they have to do with.

die()
long_message
print_context()

Print extended debug information about this exception.

This is usually printed when the top-level Spack error handler calls die(), but it can be called separately beforehand if a lower-level error handler needs to print error context and continue without raising the exception to the top level.

exception spack.error.SpecError(message, long_message=None)

Bases: spack.error.SpackError

Superclass for all errors that occur while constructing specs.

exception spack.error.UnsatisfiableSpecError(provided, required, constraint_type)

Bases: spack.error.SpecError

Raised when a spec conflicts with package constraints. Provide the requirement that was violated when raising.

exception spack.error.UnsupportedPlatformError(message)

Bases: spack.error.SpackError

Raised by packages when a platform is not supported

spack.fetch_strategy module

Fetch strategies are used to download source code into a staging area in order to build it. They need to define the following methods:

  • fetch()
    This should attempt to download/check out source from somewhere.
  • check()
    Apply a checksum to the downloaded source code, e.g. for an archive. May not do anything if the fetch method was safe to begin with.
  • expand()
    Expand (e.g., an archive) downloaded file to source.
  • reset()
    Restore original state of downloaded code. Used by clean commands. This may just remove the expanded source and re-expand an archive, or it may run something like git reset --hard.
  • archive()
    Archive a source directory, e.g. for creating a mirror.
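
To make the contract above concrete, here is a minimal, hedged sketch of a hypothetical strategy (RsyncFetchStrategy does not exist in Spack; the registration attributes follow those documented on FetchStrategy below, and the method bodies are stubs only):

from spack.fetch_strategy import FetchStrategy

class RsyncFetchStrategy(FetchStrategy):      # hypothetical example
    enabled = True                            # advertise this strategy
    required_attributes = ('rsync',)          # version() kwarg that selects it

    def __init__(self, **kwargs):
        super(RsyncFetchStrategy, self).__init__()
        self.source = kwargs.get('rsync')

    def fetch(self):
        # copy self.source into the stage directory (stub)
        pass

    def check(self):
        # nothing to checksum for a plain directory copy (stub)
        pass

    def expand(self):
        # nothing to expand; the copy is already a source tree (stub)
        pass

    def reset(self):
        # re-copy the pristine source (stub)
        pass

    def archive(self, destination):
        # tar the staged source into destination for a mirror (stub)
        pass
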
class spack.fetch_strategy.CacheURLFetchStrategy(*args, **kwargs)

Bases: spack.fetch_strategy.URLFetchStrategy

The resource associated with a cache URL may be out of date.

fetch(*args, **kwargs)
exception spack.fetch_strategy.ChecksumError(message, long_message=None)

Bases: spack.fetch_strategy.FetchError

Raised when archive fails to checksum.

class spack.fetch_strategy.FSMeta(name, bases, dict)

Bases: type

This metaclass registers all fetch strategies in a list.

exception spack.fetch_strategy.FailedDownloadError(url, msg='')

Bases: spack.fetch_strategy.FetchError

Raised when a download fails.

exception spack.fetch_strategy.FetchError(message, long_message=None)

Bases: spack.error.SpackError

Superclass for fetcher errors.

class spack.fetch_strategy.FetchStrategy

Bases: object

Superclass of all fetch strategies.

archive(destination)

Create an archive of the downloaded data for a mirror.

For downloaded files, this should preserve the checksum of the original file. For repositories, it should just create an expandable tarball out of the downloaded repository.

cachable

Whether fetcher is capable of caching the resource it retrieves.

This generally is determined by whether the resource is identifiably associated with a specific package version.

Returns:True if can cache, False otherwise.
Return type:bool
check()

Checksum the archive fetched by this FetchStrategy.

enabled = False
expand()

Expand the downloaded archive.

fetch()

Fetch source code archive or repo.

Returns:True on success, False on failure.
Return type:bool
classmethod matches(args)
required_attributes = None
reset()

Revert to freshly downloaded state.

For archive files, this may just re-expand the archive.

set_stage(stage)

This is called by Stage before any of the fetching methods are called on the stage.

class spack.fetch_strategy.FsCache(root)

Bases: object

destroy()
fetcher(targetPath, digest, **kwargs)
store(fetcher, relativeDst)
class spack.fetch_strategy.GitFetchStrategy(**kwargs)

Bases: spack.fetch_strategy.VCSFetchStrategy

Fetch strategy that gets source code from a git repository. Use like this in a package:

version('name', git='https://github.com/project/repo.git')

Optionally, you can provide a branch, tag, or commit to check out, e.g.:

version('1.1', git='https://github.com/project/repo.git', tag='v1.1')

You can use these three optional attributes in addition to git:

  • branch: Particular branch to build from (default is master)
  • tag: Particular tag to check out
  • commit: Particular commit hash in the repo
archive(destination)
cachable
enabled = True
fetch()
git
git_version
required_attributes = ('git',)
reset(*args, **kwargs)
class spack.fetch_strategy.GoFetchStrategy(**kwargs)

Bases: spack.fetch_strategy.VCSFetchStrategy

Fetch strategy that employs the go get infrastructure.

Use like this in a package:

version('name',
        go='github.com/monochromegane/the_platinum_searcher/...')

Go get does not natively support versions; they can be faked with git.

archive(destination)
enabled = True
fetch(*args, **kwargs)
go
go_version
required_attributes = ('go',)
reset(*args, **kwargs)
class spack.fetch_strategy.HgFetchStrategy(**kwargs)

Bases: spack.fetch_strategy.VCSFetchStrategy

Fetch strategy that gets source code from a Mercurial repository. Use like this in a package:

version('name', hg='https://jay.grs.rwth-aachen.de/hg/lwm2')

Optionally, you can provide a branch, or revision to check out, e.g.:

version('torus',
        hg='https://jay.grs.rwth-aachen.de/hg/lwm2', branch='torus')

You can use the optional ‘revision’ attribute to check out a branch, tag, or particular revision in hg. To prevent non-reproducible builds, using a moving target like a branch is discouraged.

  • revision: Particular revision, branch, or tag.
archive(destination)
cachable
enabled = True
fetch(*args, **kwargs)
hg

Returns:The hg executable
Return type:Executable

required_attributes = ['hg']
reset(*args, **kwargs)
exception spack.fetch_strategy.InvalidArgsError(pkg, version)

Bases: spack.fetch_strategy.FetchError

exception spack.fetch_strategy.NoArchiveFileError(message, long_message=None)

Bases: spack.fetch_strategy.FetchError

Raised when an archive file is expected but none exists.

exception spack.fetch_strategy.NoCacheError(message, long_message=None)

Bases: spack.fetch_strategy.FetchError

Raised when there is no cached archive for a package.

exception spack.fetch_strategy.NoDigestError(message, long_message=None)

Bases: spack.fetch_strategy.FetchError

Raised after attempt to checksum when URL has no digest.

exception spack.fetch_strategy.NoStageError(method)

Bases: spack.fetch_strategy.FetchError

Raised when fetch operations are called before set_stage().

class spack.fetch_strategy.SvnFetchStrategy(**kwargs)

Bases: spack.fetch_strategy.VCSFetchStrategy

Fetch strategy that gets source code from a subversion repository. Use like this in a package:

version('name', svn='http://www.example.com/svn/trunk')

Optionally, you can provide a revision for the URL:

version('name', svn='http://www.example.com/svn/trunk',
        revision='1641')
archive(destination)
cachable
enabled = True
fetch(*args, **kwargs)
required_attributes = ['svn']
reset(*args, **kwargs)
svn
class spack.fetch_strategy.URLFetchStrategy(url=None, digest=None, **kwargs)

Bases: spack.fetch_strategy.FetchStrategy

FetchStrategy that pulls source code from a URL for an archive, checks the archive against a checksum, and decompresses the archive.

archive(destination)

Just moves this archive to the destination.

archive_file

Path to the source archive within this stage directory.

cachable
check(*args, **kwargs)

Check the downloaded archive against a checksum digest. No-op if this stage checks code out of a repository.

curl
enabled = True
expand(*args, **kwargs)
fetch(*args, **kwargs)
required_attributes = ['url']
reset(*args, **kwargs)

Removes the source path if it exists, then re-expands the archive.

class spack.fetch_strategy.VCSFetchStrategy(name, *rev_types, **kwargs)

Bases: spack.fetch_strategy.FetchStrategy

archive(*args, **kwargs)
check(*args, **kwargs)
expand(*args, **kwargs)
spack.fetch_strategy.all_strategies = [<class 'spack.fetch_strategy.URLFetchStrategy'>, <class 'spack.fetch_strategy.CacheURLFetchStrategy'>, <class 'spack.fetch_strategy.GoFetchStrategy'>, <class 'spack.fetch_strategy.GitFetchStrategy'>, <class 'spack.fetch_strategy.SvnFetchStrategy'>, <class 'spack.fetch_strategy.HgFetchStrategy'>]

List of all fetch strategies, created by FetchStrategy metaclass.

spack.fetch_strategy.args_are_for(args, fetcher)
spack.fetch_strategy.for_package_version(pkg, version)

Determine a fetch strategy based on the arguments supplied to version() in the package description.

spack.fetch_strategy.from_kwargs(**kwargs)

Construct an appropriate FetchStrategy from the given keyword arguments.

Parameters:**kwargs – dictionary of keyword arguments, e.g. from a version() directive in a package.
Returns:
The fetch strategy that matches the args, based
on attribute names (e.g., git, hg, etc.)
Return type:fetch_strategy
Raises:FetchError – If no fetch_strategy matches the args.
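
For instance (a hedged sketch; the repository URL is hypothetical), keyword arguments in version() style select the matching strategy:

from spack.fetch_strategy import from_kwargs

fetcher = from_kwargs(git='https://github.com/project/repo.git', tag='v1.1')
# yields a GitFetchStrategy, because 'git' is among its required attributes
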
spack.fetch_strategy.from_list_url(pkg)

If a package provides a URL which lists URLs for resources by version, this can create a fetcher for a URL discovered for the specified package's version.

spack.fetch_strategy.from_url(url)

Given a URL, find an appropriate fetch strategy for it. Currently just gives you a URLFetchStrategy that uses curl.

TODO: make this return appropriate fetch strategies for other
types of URLs.

spack.file_cache module

exception spack.file_cache.CacheError(message, long_message=None)

Bases: spack.error.SpackError

class spack.file_cache.FileCache(root)

Bases: object

This class manages cached data in the filesystem.

  • Cache files are fetched and stored by unique keys. Keys can be relative paths, so that there can be some hierarchy in the cache.
  • The FileCache handles locking cache files for reading and writing, so client code need not manage locks for cache entries.
cache_path(key)

Path to the file in the cache for a particular key.

destroy()

Remove all files under the cache root.

init_entry(key)

Ensure we can access a cache file. Create a lock for it if needed.

Return whether the cache file exists yet or not.

mtime(key)

Return modification time of cache file, or 0 if it does not exist.

Time is in units returned by os.stat in the mtime field, which is platform-dependent.

read_transaction(key)

Get a read transaction on a file cache item.

Returns a ReadTransaction context manager and opens the cache file for reading. You can use it like this:

with file_cache_object.read_transaction(key) as cache_file:
    cache_file.read()
remove(key)
write_transaction(key)

Get a write transaction on a file cache item.

Returns a WriteTransaction context manager that opens a temporary file for writing. Once the context manager finishes, if nothing went wrong, moves the file into place on top of the old file atomically.
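
A short hedged usage sketch using only the calls documented above (the cache root and key are hypothetical):

from spack.file_cache import FileCache

cache = FileCache('/tmp/spack-file-cache')      # hypothetical cache root
if cache.init_entry('indexes/example'):         # True if the entry already exists
    print(cache.cache_path('indexes/example'))  # path of the entry on disk
    print(cache.mtime('indexes/example'))       # stat mtime of the entry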

spack.filesystem_view module

class spack.filesystem_view.FilesystemView(root, layout, **kwargs)

Bases: object

Governs a filesystem view that is located at certain root-directory.

Packages are linked from their install directories into a common file hierarchy.

In distributed filesystems, loading each installed package separately can lead to slow-downs due to too many directories being traversed. This can be circumvented by loading all needed modules into a common directory structure.

add_extension(spec)

Add (link) an extension in this view.

add_specs(*specs, **kwargs)

Add given specs to view.

The supplied specs might be standalone packages or extensions of other packages.

Should accept with_dependencies as keyword argument (default True) to indicate whether or not dependencies should be activated as well.

Should accept an exclude keyword argument containing a list of regexps that filter out matching spec names.

This method should make use of activate_{extension,standalone}.

add_standalone(spec)

Add (link) a standalone package into this view.

check_added(spec)

Check if the given concrete spec is active in this view.

get_all_specs()

Get all specs currently active in this view.

get_spec(spec)

Return the actual spec linked in this view (i.e. do not look it up in the database by name).

spec can be a name or a spec from which the name is extracted.

As there can only be a single version active for any spec, the name is enough to identify the spec in the view.

If no spec is present, returns None.

print_status(*specs, **kwargs)
Print a short summary about the given specs, detailing whether:
  • they are active in the view.
  • they are active but the activated version differs.
  • they are not active in the view.

Takes with_dependencies keyword argument so that the status of dependencies is printed as well.

remove_extension(spec)

Remove (unlink) an extension from this view.

remove_specs(*specs, **kwargs)

Removes given specs from view.

The supplied spec might be a standalone package or an extension of another package.

Should accept with_dependencies as keyword argument (default True) to indicate whether or not dependencies should be deactivated as well.

Should accept with_dependents as keyword argument (default True) to indicate whether or not dependents on the deactivated specs should be removed as well.

Should accept an exclude keyword argument containing a list of regexps that filter out matching spec names.

This method should make use of deactivate_{extension,standalone}.

remove_standalone(spec)

Remove (unlink) a standalone package from this view.

class spack.filesystem_view.YamlFilesystemView(root, layout, **kwargs)

Bases: spack.filesystem_view.FilesystemView

Filesystem view to work with a yaml based directory layout.

add_extension(spec)
add_specs(*specs, **kwargs)
add_standalone(spec)
check_added(spec)
get_all_specs()
get_conflicts(*specs)

Return list of tuples (<spec>, <spec in view>) where the spec active in the view differs from the one to be activated.

get_path_meta_folder(spec)

Get path to meta folder for either spec or spec name.

get_spec(spec)
print_conflict(spec_active, spec_specified, level='error')

Singular print function for spec conflicts.

print_status(*specs, **kwargs)
purge_empty_directories()

Ascend from the leaves accessible from path and remove empty directories.

remove_extension(spec, with_dependents=True)

Remove (unlink) an extension from this view.

remove_specs(*specs, **kwargs)
remove_standalone(spec)

Remove (unlink) a standalone package from this view.

spack.graph module

Functions for graphing DAGs of dependencies.

This file contains code for graphing DAGs of software packages (i.e. Spack specs). There are two main functions you probably care about:

graph_ascii() will output a colored graph of a spec in ascii format, kind of like the graph git shows with "git log --graph", e.g.:

o  mpileaks
|\
| |\
| o |  callpath
|/| |
| |\|
| |\ \
| | |\ \
| | | | o  adept-utils
| |_|_|/|
|/| | | |
o | | | |  mpi
 / / / /
| | o |  dyninst
| |/| |
|/|/| |
| | |/
| o |  libdwarf
|/ /
o |  libelf
 /
o  boost

graph_dot() will output a graph of a spec (or multiple specs) in dot format.

Note that graph_ascii assumes a single spec while graph_dot can take a number of specs as input.

spack.graph.topological_sort(spec, reverse=False, deptype='all')

Topological sort for specs.

Return a list of dependency specs sorted topologically. The spec argument is not modified in the process.

spack.graph.graph_ascii(spec, node='o', out=None, debug=False, indent=0, color=None, deptype='all')
class spack.graph.AsciiGraph

Bases: object

write(spec, color=None, out=None)

Write out an ascii graph of the provided spec.

Arguments: spec – spec to graph. This only handles one spec at a time.

Optional arguments:

out – file object to write out to (default is sys.stdout)

color – whether to write in color. Default is to autodetect
based on output file.
spack.graph.graph_dot(specs, deptype='all', static=False, out=None)

Generate a graph in dot format of all provided specs.

Print out a dot formatted graph of all the dependencies between packages. Output can be passed to graphviz, e.g.:

spack graph --dot qt | dot -Tpdf > spack-graph.pdf

spack.main module

This is the implementation of the Spack command line executable.

In a normal Spack installation, this is invoked from the bin/spack script after the system path is set up.

class spack.main.SpackArgumentParser(prog=None, usage=None, description=None, epilog=None, version=None, parents=[], formatter_class=<class 'argparse.HelpFormatter'>, prefix_chars='-', fromfile_prefix_chars=None, argument_default=None, conflict_handler='error', add_help=True)

Bases: argparse.ArgumentParser

add_command(name)

Add one subcommand to this parser.

format_help(level='short')
format_help_sections(level)

Format help on sections for a particular verbosity level.

Parameters:level (str) – ‘short’ or ‘long’ (more commands shown for long)
class spack.main.SpackCommand(command)

Bases: object

Callable object that invokes a spack command (for testing).

Example usage:

install = SpackCommand('install')
install('-v', 'mpich')

Use this to invoke Spack commands directly from Python and check their output.

exception spack.main.SpackCommandError

Bases: exceptions.Exception

Raised when SpackCommand execution fails.

spack.main.add_all_commands(parser)

Add all spack subcommands to the parser.

spack.main.allows_unknown_args(command)

Implements really simple argument injection for unknown arguments.

Commands may add an optional argument called "unknown args" to indicate they can handle unknown args, and we'll pass the unknown args in.

spack.main.index_commands()

Create an index of commands by section for this help level.

spack.main.main(argv=None)

This is the entry point for the Spack command.

Parameters:argv (list of str or None) – command line arguments, NOT including the executable name. If None, parses from sys.argv.
spack.main.make_argument_parser()

Create a basic argument parser without any subcommands added.

spack.main.set_working_dir()

Change the working directory to getcwd, or spack prefix if no cwd.

spack.main.setup_main_options(args)

Configure spack globals based on the basic options.

spack.mirror module

This file contains code for creating spack mirror directories. A mirror is an organized hierarchy containing specially named archive files. This enables spack to know where to find files in a mirror if the main server for a particular package is down. Or, if the computer where spack is run is not connected to the internet, it allows spack to download packages directly from a mirror (e.g., on an intranet).

exception spack.mirror.MirrorError(msg, long_msg=None)

Bases: spack.error.SpackError

Superclass of all mirror-creation related errors.

spack.mirror.add_single_spec(spec, mirror_root, categories, **kwargs)
spack.mirror.create(path, specs, **kwargs)

Create a directory to be used as a spack mirror, and fill it with package archives.

Parameters:
  • path – Path to create a mirror directory hierarchy in.
  • specs – Any package versions matching these specs will be added to the mirror.
Keyword Arguments:
 
  • no_checksum – If True, do not checksum when fetching (default False)
  • num_versions – Max number of versions to fetch per spec, if spec is ambiguous (default is 0 for all of them)
Return Value:

Returns a tuple of lists: (present, mirrored, error)

  • present: Package specs that were already present.
  • mirrored: Package specs that were successfully mirrored.
  • error: Package specs that failed to mirror due to some error.

This routine iterates through all known package versions, and it creates specs for those versions. If the version satisfies any spec in the specs list, it is downloaded and added to the mirror.
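
A hedged usage sketch (the mirror path and specs are placeholders):

import spack.mirror
from spack.spec import Spec

present, mirrored, error = spack.mirror.create(
    '/path/to/mirror', [Spec('zlib'), Spec('bzip2')], num_versions=1)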

spack.mirror.get_matching_versions(specs, **kwargs)

Get a spec for EACH known version matching any spec in the list.

spack.mirror.mirror_archive_filename(spec, fetcher, resourceId=None)

Get the name of the spec’s archive in the mirror.

spack.mirror.mirror_archive_path(spec, fetcher, resourceId=None)

Get the relative path to the spec’s archive within a mirror.

spack.mirror.suggest_archive_basename(resource)

Return a tentative basename for an archive.

Raises:RuntimeError – if the name is not an allowed archive type.

spack.multimethod module

This module contains utilities for using multi-methods in spack. You can think of multi-methods like overloaded methods – they’re methods with the same name, and we need to select a version of the method based on some criteria. e.g., for overloaded methods, you would select a version of the method to call based on the types of its arguments.

In spack, multi-methods are used to ease the life of package authors. They allow methods like install() (or other methods called by install()) to declare multiple versions to be called when the package is instantiated with different specs. e.g., if the package is built with OpenMPI on x86_64, you might want to call a different install method than if it was built for mpich2 on BlueGene/Q. Likewise, you might want to do a different type of install for different versions of the package.

Multi-methods provide a simple decorator-based syntax for this that avoids overly complicated rat nests of if statements. Obviously, depending on the scenario, regular old conditionals might be clearer, so package authors should use their judgement.

exception spack.multimethod.MultiMethodError(message)

Bases: spack.error.SpackError

Superclass for multimethod dispatch errors

exception spack.multimethod.NoSuchMethodError(cls, method_name, spec, possible_specs)

Bases: spack.error.SpackError

Raised when we can’t find a version of a multi-method.

class spack.multimethod.SpecMultiMethod(default=None)

Bases: object

This implements a multi-method for Spack specs. Packages are instantiated with a particular spec, and you may want to execute different versions of methods based on what the spec looks like. For example, you might want to call a different version of install() for one platform than you call on another.

The SpecMultiMethod class implements a callable object that handles method dispatch. When it is called, it looks through registered methods and their associated specs, and it tries to find one that matches the package’s spec. If it finds one (and only one), it will call that method.

The package author is responsible for ensuring that only one condition on multi-methods ever evaluates to true. If multiple methods evaluate to true, this will raise an exception.

This is intended for use with decorators (see below). The decorator (see docs below) creates SpecMultiMethods and registers method versions with them.

To register a method, you can do something like this:
mm = SpecMultiMethod()
mm.register('^chaos_5_x86_64_ib', some_method)

The object registered needs to be a Spec or some string that will parse to be a valid spec.

When the mm is actually called, it selects a version of the method to call based on the sys_type of the object it is called on.

See the docs for decorators below for more details.

register(spec, method)

Register a version of a method for a particular sys_type.

class spack.multimethod.when(spec)

Bases: object

This annotation lets packages declare multiple versions of methods like install() that depend on the package’s spec. For example:

class SomePackage(Package):
    ...

    def install(self, prefix):
        # Do default install
        pass

    @when('arch=chaos_5_x86_64_ib')
    def install(self, prefix):
        # This will be executed instead of the default install if
        # the package's platform() is chaos_5_x86_64_ib.
        pass

    @when('arch=bgqos_0')
    def install(self, prefix):
        # This will be executed if the package's sys_type is bgqos_0
        pass

This allows each package to have a default version of install() AND specialized versions for particular platforms. The version that is called depends on the architecture of the instantiated package.

Note that this works for methods other than install, as well. So, if you only have part of the install that is platform specific, you could do this:

class SomePackage(Package):
    ...
    # virtual dependence on MPI.
    # could resolve to mpich, mpich2, OpenMPI
    depends_on('mpi')

    def setup(self):
        # do nothing in the default case
        pass

    @when('^openmpi')
    def setup(self):
        # do something special when this is built with OpenMPI for
        # its MPI implementations.
        pass


    def install(self, prefix):
        # Do common install stuff
        self.setup()
        # Do more common install stuff

There must be one (and only one) @when clause that matches the package’s spec. If there is more than one, or if none match, then the method will raise an exception when it’s called.

Note that the default version of decorated methods must always come first. Otherwise it will override all of the platform-specific versions. There’s not much we can do to get around this because of the way decorators work.

spack.package module

This is where most of the action happens in Spack. See the Package docs for detailed instructions on how the class works and on how to write your own packages.

The spack package structure is based strongly on Homebrew (http://wiki.github.com/mxcl/homebrew/), mainly because Homebrew makes it very easy to create packages. For a complete rundown on spack and how it differs from homebrew, look at the README.

exception spack.package.ActivationError(msg, long_msg=None)

Bases: spack.package.ExtensionError

exception spack.package.DependencyConflictError(conflict)

Bases: spack.error.SpackError

Raised when the dependencies cannot be flattened as asked for.

exception spack.package.ExtensionConflictError(path)

Bases: spack.package.ExtensionError

exception spack.package.ExtensionError(message, long_msg=None)

Bases: spack.package.PackageError

exception spack.package.ExternalPackageError(message, long_msg=None)

Bases: spack.package.InstallError

Raised by install() when a package is only for external use.

exception spack.package.FetchError(message, long_msg=None)

Bases: spack.error.SpackError

Raised when something goes wrong during fetch.

exception spack.package.InstallError(message, long_msg=None)

Bases: spack.error.SpackError

Raised when something goes wrong during install or uninstall.

class spack.package.InstallPhase(name)

Bases: object

Manages a single phase of the installation.

This descriptor stores at creation time the name of the method it should search for execution. The method is retrieved at __get__ time, so that it can be overridden by subclasses of whatever class declared the phases.

It also provides hooks to execute arbitrary callbacks before and after the phase.

copy()
exception spack.package.NoURLError(cls)

Bases: spack.package.PackageError

Raised when someone tries to build a URL for a package with no URLs.

class spack.package.Package(spec)

Bases: spack.package.PackageBase

General purpose class with a single install phase that needs to be coded by packagers.

build_system_class = 'Package'

This attribute is used in UI queries that need to know which build-system class we are using.

phases = ['install']

The one and only phase

class spack.package.PackageBase(spec)

Bases: object

This is the superclass for all spack packages.

*The Package class*

Package is where the bulk of the work of installing packages is done.

A package defines how to fetch, verify (via, e.g., md5), build, and install a piece of software. A Package also defines what other packages it depends on, so that dependencies can be installed along with the package itself. Packages are written in pure python.

Packages are all submodules of spack.packages. If spack is installed in $prefix, all of its python files are in $prefix/lib/spack. Most of them are in the spack module, so all the packages live in $prefix/lib/spack/spack/packages.

All you have to do to create a package is make a new subclass of Package in this directory. Spack automatically scans the python files there and figures out which one to import when you invoke it.

An example package

Let’s look at the cmake package to start with. This package lives in $prefix/var/spack/repos/builtin/packages/cmake/package.py:

from spack import *
class Cmake(Package):
    homepage  = 'https://www.cmake.org'
    url       = 'http://www.cmake.org/files/v2.8/cmake-2.8.10.2.tar.gz'
    md5       = '097278785da7182ec0aea8769d06860c'

    def install(self, spec, prefix):
        configure('--prefix=%s'   % prefix,
                  '--parallel=%s' % make_jobs)
        make()
        make('install')

Naming conventions

There are two names you should care about:

  1. The module name, cmake.

    • Users will refer to this name, e.g. 'spack install cmake'.
    • It can include _, -, and numbers (it can even start with a number).
  2. The class name, “Cmake”. This is formed by converting - or _ in the module name to camel case. If the name starts with a number, we prefix the class name with _. Examples:

    Module Name    Class Name
    foo_bar        FooBar
    docbook-xml    DocbookXml
    FooBar         Foobar
    3proxy         _3proxy

    The class name is what spack looks for when it loads a package module.

Required Attributes

Aside from proper naming, here is the bare minimum set of things you need when you make a package:

homepage:
informational URL, so that users know what they’re installing.
url or url_for_version(self, version):
If url, then the URL of the source archive that spack will fetch. If url_for_version(), then a method returning the URL required to fetch a particular version.
install():
This function tells spack how to build and install the software it downloaded.

Optional Attributes

You can also optionally add these attributes, if needed:

list_url:
Webpage to scrape for available version strings. Default is the directory containing the tarball; use this if the default isn’t correct so that invoking ‘spack versions’ will work for this package.
url_version(self, version):
When spack downloads packages at particular versions, it just converts version to string with str(version). Override this if your package needs special version formatting in its URL. boost is an example of a package that needs this.

*Creating Packages*

As a package creator, you can probably ignore most of the preceding information, because you can use the ‘spack create’ command to do it all automatically.

You as the package creator generally only have to worry about writing your install function and specifying dependencies.

spack create

Most software comes in nicely packaged tarballs, like this one

http://www.cmake.org/files/v2.8/cmake-2.8.10.2.tar.gz

Taking a page from homebrew, spack deduces pretty much everything it needs to know from the URL above. If you simply type this:

spack create http://www.cmake.org/files/v2.8/cmake-2.8.10.2.tar.gz

Spack will download the tarball, generate an md5 hash, figure out the version and the name of the package from the URL, and create a new package file for you with all the names and attributes set correctly.

Once this skeleton code is generated, spack pops up the new package in your $EDITOR so that you can modify the parts that need changes.

Dependencies

If your package requires another in order to build, you can specify that like this:

class Stackwalker(Package):
    ...
    depends_on("libdwarf")
    ...

This tells spack that before it builds stackwalker, it needs to build the libdwarf package as well. Note that this is the module name, not the class name (The class name is really only used by spack to find your package).

Spack will download and install each dependency before it installs your package. In addition, it will add -L, -I, and rpath arguments to your compiler and linker for each dependency. In most cases, this allows you to avoid specifying any dependencies in your configure or cmake line; you can just run configure or cmake without any additional arguments and it will find the dependencies automatically.

The Install Function

The install function is designed so that someone not too terribly familiar with Python could write a package installer. For example, we put a number of commands in install scope that you can use almost like shell commands. These include make, configure, cmake, rm, rmtree, mkdir, mkdirp, and others.

You can see above in the cmake script that these commands are used to run configure and make almost like they’re used on the command line. The only difference is that they are python function calls and not shell commands.

It may be puzzling to you where the commands and functions in install live. They are NOT instance variables on the class; this would require us to type ‘self.’ all the time and it makes the install code unnecessarily long. Rather, spack puts these commands and variables in module scope for your Package subclass. Since each package has its own module, this doesn’t pollute other namespaces, and it allows you to more easily implement an install function.

For a full list of commands and variables available in module scope, see the add_commands_to_module() function in this class. This is where most of them are created and set on the module.

Parallel Builds

By default, Spack will run make in parallel when you run make() in your install function. Spack figures out how many cores are available on your system and runs make with -j<cores>. If you do not want this behavior, you can explicitly mark a package not to use parallel make:

class SomePackage(Package):
    ...
    parallel = False
    ...

This changes the default behavior so that make is sequential. If you still want to build some parts in parallel, you can do this in your install function:

make(parallel=True)

Likewise, if you have not set parallel = False in your Package, you can keep the default parallel behavior and run make like this when you want a sequential build:

make(parallel=False)

Package Lifecycle

This section is really only for developers of new spack commands.

A package’s lifecycle over a run of Spack looks something like this:

p = Package()             # Done for you by spack

p.do_fetch()              # downloads tarball from a URL
p.do_stage()              # expands tarball in a temp directory
p.do_patch()              # applies patches to expanded source
p.do_install()            # calls package's install() function
p.do_uninstall()          # removes install directory

There are also some other commands that clean the build area:

p.do_clean()              # removes the stage directory entirely
p.do_restage()            # removes the build directory and
                          # re-expands the archive.

The convention used here is that a do_* function is intended to be called internally by Spack commands (in spack.cmd). These aren’t for package writers to override, and doing so may break the functionality of the Package class.

Package creators override functions like install() (all of them do this), clean() (some of them do this), and others to provide custom behavior.

activate(extension, **kwargs)

Make the extension package usable by linking all its files to a target provided by the directory layout (depending on whether the user wants to activate it globally or in a specified filesystem view).

Package authors can override this method to support other extension mechanisms. Spack internals (commands, hooks, etc.) should call do_activate() method so that proper checks are always executed.

all_urls
architecture

Get the spack.architecture.Arch object that represents the environment in which this package will be built.

build_log_path
build_time_test_callbacks = None
check_for_unfinished_installation(keep_prefix=False, restage=False)

Check for leftover files from partially-completed prior install to prepare for a new install attempt. Options control whether these files are reused (vs. destroyed). This function considers a package fully-installed if there is a DB entry for it (in that way, it is more strict than Package.installed). The return value is used to indicate when the prefix exists but the install is not complete.

compiler

Get the spack.compiler.Compiler object used to build this package

deactivate(extension, **kwargs)

Unlinks all files from extension out of this package’s install dir or the corresponding filesystem view.

Package authors can override this method to support other extension mechanisms. Spack internals (commands, hooks, etc.) should call do_deactivate() method so that proper checks are always executed.

dependencies_of_type(*deptypes)

Get dependencies that can possibly have these deptypes.

This analyzes the package and determines which dependencies can be a certain kind of dependency. Note that they may not always be this kind of dependency, since dependencies can be optional, so something may be a build dependency in one configuration and a run dependency in another.

dependency_activations()
do_activate(force=False, verbose=True, extensions_layout=None)

Called on an extension to invoke the extendee’s activate method.

Commands should call this routine, and should not call activate() directly.

do_clean()

Removes the package’s build stage and source tarball.

do_deactivate(**kwargs)

Called on the extension to invoke extendee’s deactivate() method.

remove_dependents=True deactivates extensions depending on this package instead of raising an error.

do_fake_install()

Make a fake install directory containing fake executables, headers, and libraries.

do_fetch(mirror_only=False)

Creates a stage directory and downloads the tarball for this package. Working directory will be set to the stage directory.

do_install(keep_prefix=False, keep_stage=False, install_source=False, install_deps=True, skip_patch=False, verbose=False, make_jobs=None, fake=False, explicit=False, dirty=None, **kwargs)

Called by commands to install a package and its dependencies.

Package implementations should override install() to describe their build process.

Parameters:
  • keep_prefix (bool) – Keep install prefix on failure. By default, destroys it.
  • keep_stage (bool) – By default, stage is destroyed only if there are no exceptions during build. Set to True to keep the stage even with exceptions.
  • install_source (bool) – By default, source is not installed, but for debugging it might be useful to keep it around.
  • install_deps (bool) – Install dependencies before installing this package
  • skip_patch (bool) – Skip patch stage of build if True.
  • verbose (bool) – Display verbose build output (by default, suppresses it)
  • make_jobs (int) – Number of make jobs to use for install. Default is ncpus
  • fake (bool) – Don’t really build; install fake stub files instead.
  • explicit (bool) – True if package was explicitly installed, False if package was implicitly installed (as a dependency).
  • dirty (bool) – Don’t clean the build environment before installing.
  • force (bool) – Install again, even if already installed.
do_patch()

Applies patches if they haven’t been applied already.

do_restage()

Reverts expanded/checked out source to a pristine state.

do_stage(mirror_only=False)

Unpacks and expands the fetched tarball.

do_uninstall(force=False)

Uninstall this package by spec.

env_path
extendable = False

Most packages are NOT extendable. Set to True if you want extensions.

extendee_args

Spec of the extendee of this package, or None if it is not an extension

extendee_spec

Spec of the extendee of this package, or None if it is not an extension

extends(spec)

Returns True if this package extends the given spec.

If self.spec is concrete, this returns whether this package extends the given spec.

If self.spec is not concrete, this returns whether this package may extend the given spec.

fetch_remote_versions()

Try to find remote versions of this package using the list_url and any other URLs described in the package file.

fetcher
format_doc(**kwargs)

Wrap doc string at 72 characters and format nicely

global_license_dir

Returns the directory where global license files for all packages are stored.

global_license_file

Returns the path where a global license file for this particular package should be stored.

install_time_test_callbacks = None
installed
is_activated(extensions_layout=None)

Return True if package is activated.

is_extension
license_comment = '#'

String. Contains the symbol used by the license manager to denote a comment. Defaults to #.

license_files = []

List of strings. These are files that the software searches for when looking for a license. All file paths must be relative to the installation directory. More complex packages like Intel may require multiple licenses for individual components. Defaults to the empty list.

license_required = False

Boolean. If set to True, this software requires a license. If set to False, all of the license_* attributes will be ignored. Defaults to False.

license_url = ''

String. A URL pointing to license setup instructions for the software. Defaults to the empty string.

license_vars = []

List of strings. Environment variables that can be set to tell the software where to look for a license if it is not in the usual location. Defaults to the empty list.

log()
log_path
classmethod lookup_patch(sha256)

Look up a patch associated with this package by its sha256 sum.

Parameters:sha256 (str) – sha256 sum of the patch to look up
Returns:
Patch object with the given hash, or None if
not found.
Return type:(Patch)

To do the lookup, we build an index lazily. This allows us to avoid computing a sha256 for every patch and on every package load. With lazy hashing, we only compute hashes on lookup, which usually happens at build time.

maintainers = []

List of strings which contains GitHub usernames of package maintainers. Do not include @ here in order not to unnecessarily ping the users.

make_jobs = 4

Number of jobs to use for parallel make. If set, overrides the default of ncpus.

module

Use this to add variables to the class’s module’s scope. This lets us use custom syntax in the install method.

namespace
nearest_url(version)

Finds the URL for the next lowest version with a URL. If there is no lower version with a URL, uses the package url property. If that isn’t there, uses a higher URL, and if that isn’t there raises an error.

package_dir

Return the directory where the package.py file lives.

parallel = True

By default we build in parallel. Subclasses can override this.

possible_dependencies(transitive=True, visited=None)

Return set of possible transitive dependencies of this package.

Parameters:transitive (bool) – include all transitive dependencies if True, only direct dependencies if False.
prefix

Get the prefix into which this package should be installed.

provides(vpkg_name)

True if this package provides a virtual package with the specified name

remove_prefix()

Removes the prefix for a package along with any empty parent directories

rpath

Get the rpath this package links with, as a list of paths.

rpath_args

Get the rpath args as a string, with -Wl,-rpath, for each element

run_tests = False

By default do not run tests within package’s install()

sanity_check_is_dir = []

List of prefix-relative directory paths (or a single path). If these do not exist after install, or if they exist but are not directories, sanity checks will fail.

sanity_check_is_file = []

List of prefix-relative file paths (or a single path). If these do not exist after install, or if they exist but are not files, sanity checks fail.

sanity_check_prefix()

This function checks whether install succeeded.

setup_dependent_environment(spack_env, run_env, dependent_spec)

Set up the environment of packages that depend on this one.

This is similar to setup_environment, but it is used to modify the compile and runtime environments of packages that depend on this one. This gives packages like Python and others that follow the extension model a way to implement common environment or compile-time settings for dependencies.

This is useful if there are some common steps to installing all extensions for a certain package.

Example:

  1. Installing python modules generally requires PYTHONPATH to point to the lib/pythonX.Y/site-packages directory in the module’s install prefix. This method could be used to set that variable.
Parameters:
  • spack_env (EnvironmentModifications) – List of environment modifications to be applied when the dependent package is built within Spack.
  • run_env (EnvironmentModifications) – List of environment modifications to be applied when the dependent package is run outside of Spack. These are added to the resulting module file.
  • dependent_spec (Spec) – The spec of the dependent package about to be built. This allows the extendee (self) to query the dependent’s state. Note that this package’s spec is available as self.spec.
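
A hedged sketch of case 1 above, inside an extendable package (the site-packages path is illustrative, and the rest of the package definition is elided):

from spack import *

class Python(Package):
    # ... homepage, url, versions, install(), etc. omitted ...

    def setup_dependent_environment(self, spack_env, run_env, dependent_spec):
        # Point dependent builds (spack_env) and their generated module
        # files (run_env) at this Python's site-packages directory.
        site_packages = self.prefix + '/lib/python2.7/site-packages'
        spack_env.prepend_path('PYTHONPATH', site_packages)
        run_env.prepend_path('PYTHONPATH', site_packages)
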
setup_dependent_package(module, dependent_spec)

Set up Python module-scope variables for dependent packages.

Called before the install() method of dependents.

Default implementation does nothing, but this can be overridden by an extendable package to set up the module of its extensions. This is useful if there are some common steps to installing all extensions for a certain package.

Examples:

  1. Extensions often need to invoke the python interpreter from the Python installation being extended. This routine can put a python() Executable object in the module scope for the extension package to simplify extension installs.
  2. MPI compilers could set some variables in the dependent’s scope that point to mpicc, mpicxx, etc., allowing them to be called by common name regardless of which MPI is used.
  3. BLAS/LAPACK implementations can set some variables indicating the path to their libraries, since these paths differ by BLAS/LAPACK implementation.
Parameters:
  • module (spack.package.PackageBase.module) – The Python module object of the dependent package. Packages can use this to set module-scope variables for the dependent to use.
  • dependent_spec (Spec) – The spec of the dependent package about to be built. This allows the extendee (self) to query the dependent’s state. Note that this package’s spec is available as self.spec.
setup_environment(spack_env, run_env)

Set up the compile and runtime environments for a package.

spack_env and run_env are EnvironmentModifications objects. Package authors can call methods on them to alter the environment within Spack and at runtime.

Both spack_env and run_env are applied within the build process, before this package’s install() method is called.

Modifications in run_env will also be added to the generated environment modules for this package.

Default implementation does nothing, but this can be overridden if the package needs a particular environment.

Example:

  1. Qt extensions need QTDIR set.
Parameters:
  • spack_env (EnvironmentModifications) – List of environment modifications to be applied when this package is built within Spack.
  • run_env (EnvironmentModifications) – List of environment modifications to be applied when this package is run outside of Spack. These are added to the resulting module file.
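
A hedged sketch of the Qt example above (the rest of the package definition is elided):

from spack import *

class Qt(Package):
    # ... homepage, url, versions, install(), etc. omitted ...

    def setup_environment(self, spack_env, run_env):
        # Make QTDIR point at the installed prefix, both during Spack
        # builds and in the generated module file.
        spack_env.set('QTDIR', self.prefix)
        run_env.set('QTDIR', self.prefix)
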
stage

Get the build staging area for this package.

This automatically instantiates a Stage object if the package doesn’t have one yet, but it does not create the Stage directory on the filesystem.

transitive_rpaths = True

When True, add RPATHs for the entire DAG. When False, add RPATHs only for immediate dependencies.

try_install_from_binary_cache(explicit)
static uninstall_by_spec(spec, force=False)
url_for_version(version)

Returns a URL from which the specified version of this package may be downloaded.

Parameters:version (Version) – The version for which a URL is sought. See class Version (version.py).

url_version(version)

Given a version, this returns a string that should be substituted into the package’s URL to download that version.

By default, this just returns the version string. Subclasses may need to override this, e.g. for boost versions where you need to ensure that there are _’s in the download URL.

use_xcode = False

By default do not setup mockup XCode on macOS with Clang

version
version_urls = <functools.partial object>
exception spack.package.PackageError(message, long_msg=None)

Bases: spack.error.SpackError

Raised when something is wrong with a package definition.

class spack.package.PackageMeta(name, bases, attr_dict)

Bases: spack.directives.DirectiveMetaMixin

Conveniently transforms attributes to permit extensible phases

Iterates over the attribute ‘phases’ and creates / updates private InstallPhase attributes in the class that is being initialized

phase_fmt = '_InstallPhase_{0}'
static register_callback(check_type, *phases)
exception spack.package.PackageStillNeededError(spec, dependents)

Bases: spack.package.InstallError

Raised when package is still needed by another on uninstall.

exception spack.package.PackageVersionError(version)

Bases: spack.package.PackageError

Raised when a version URL cannot automatically be determined.

spack.package.dump_packages(spec, path)

Dump all package information for a spec and its dependencies.

This creates a package repository within path for every namespace in the spec DAG, and fills the repos with package files and patch files for every node in the DAG.

spack.package.flatten_dependencies(spec, flat_dir)

Make each dependency of spec present in dir via symlink.

Execute a dummy install and flatten dependencies

spack.package.on_package_attributes(**attr_dict)

Decorator: executes the instance method only if the object has the required attribute values.

Executes the decorated method only if at the moment of calling the instance has attributes that are equal to certain values.

Parameters:attr_dict (dict) – dictionary mapping attribute names to their required values
spack.package.print_pkg(message)

Outputs a message with a package icon.

spack.package.run_after(*phases)

Registers a method of a package to be run after a given phase

spack.package.run_before(*phases)

Registers a method of a package to be run before a given phase
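
A hedged sketch combining run_after() with on_package_attributes() inside a package definition (the check method itself is hypothetical; the decorators are assumed to be importable from spack.package):

from spack import *
from spack.package import run_after, on_package_attributes

class SomePackage(Package):
    # ... rest of the package omitted ...

    @run_after('install')
    @on_package_attributes(run_tests=True)
    def check_install(self):
        # Runs only after the install phase, and only when the package
        # instance has run_tests set to True.
        pass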

spack.package.use_cray_compiler_names()

Compiler names for builds that rely on cray compiler names.

spack.package_prefs module

class spack.package_prefs.PackagePrefs(pkgname, component, vpkg=None)

Bases: object

Defines the sort order for a set of specs.

Spack's package preference implementation uses PackagePrefs objects to define sort order. The PackagePrefs class looks at Spack's packages.yaml configuration and, when called on a spec, returns a key that can be used to sort that spec in order of the user's preferences.

You can use it like this:

# key function sorts CompilerSpecs for mpich in order of preference
kf = PackagePrefs('mpich', 'compiler')
compiler_list.sort(key=kf)

Or like this:

# key function to sort VersionLists for OpenMPI in order of preference
kf = PackagePrefs('openmpi', 'version')
version_list.sort(key=kf)

Optionally, you can sort in order of preferred virtual dependency providers. To do that, provide 'providers' and a third argument denoting the virtual package (e.g., mpi):

kf = PackagePrefs('trilinos', 'providers', 'mpi')
provider_spec_list.sort(key=kf)
classmethod clear_caches()
classmethod has_preferred_providers(pkgname, vpkg)

Whether a specific package has preferred providers for the given virtual package.

classmethod preferred_variants(pkg_name)

Return a VariantMap of preferred variants/values for a spec.

class spack.package_prefs.PackageTesting

Bases: object

check(package_name)
clear()
test(package_name)
test_all()
exception spack.package_prefs.VirtualInPackagesYAMLError(message, long_message=None)

Bases: spack.error.SpackError

Raised when a disallowed virtual is found in packages.yaml

spack.package_prefs.get_packages_config()

Wrapper around get_packages_config() to validate semantics.

spack.package_prefs.is_spec_buildable(spec)

Return true if the spec pkgspec is configured as buildable

spack.package_prefs.spec_externals(spec)

Return a list of external specs (w/external directory path filled in), one for each known external installation.

spack.package_test module

spack.package_test.compare_output(current_output, blessed_output)

Compare blessed and current output of executables.

spack.package_test.compare_output_file(current_output, blessed_output_file)

Same as above, but when the blessed output is given as a file.

spack.package_test.compile_c_and_execute(source_file, include_flags, link_flags)

Compile the C source_file with include_flags and link_flags, run the resulting executable, and return its output.

spack.parse module

exception spack.parse.LexError(message, string, pos)

Bases: spack.parse.ParseError

Raised when we don’t know how to lex something.

class spack.parse.Lexer(lexicon0, mode_switches_01=[], lexicon1=[], mode_switches_10=[])

Bases: object

Base class for Lexers that keep track of line numbers.

lex(text)
lex_word(word)
token(type, value='')
exception spack.parse.ParseError(message, string, pos)

Bases: spack.error.SpackError

Raised when we hit an error while parsing.

class spack.parse.Parser(lexer)

Bases: object

Base class for simple recursive descent parsers.

accept(id)

Put the next symbol in self.token if accepted, then call gettok()

expect(id)

Like accept(), but fails if we don’t like the next token.

gettok()

Puts the next token in the input stream into self.next.

last_token_error(message)

Raise an error about the previous token in the stream.

next_token_error(message)

Raise an error about the next token in the stream.

parse(text)
push_tokens(iterable)

Adds all tokens in some iterable to the token stream.

setup(text)
unexpected_token()
class spack.parse.Token(type, value='', start=0, end=0)

Represents tokens; generated from input by lexer and fed to parse().

is_a(type)

spack.patch module

class spack.patch.FilePatch(pkg, path_or_url, level, working_dir)

Bases: spack.patch.Patch

Describes a patch that is retrieved from a file in the repository

sha256
exception spack.patch.NoSuchPatchError(message, long_message=None)

Bases: spack.error.SpackError

Raised when a patch file doesn’t exist.

class spack.patch.Patch(path_or_url, level, working_dir)

Bases: object

Base class to describe a patch that needs to be applied to some expanded source code.

apply(stage)

Apply the patch at self.path to the source code in the supplied stage

Parameters:stage – stage for the package that needs to be patched
static create(pkg, path_or_url, level=1, working_dir='.', **kwargs)

Factory method that creates an instance of some class derived from Patch

Parameters:
  • pkg – package that needs to be patched
  • path_or_url – path or url where the patch is found
  • level – patch level (default 1)
  • working_dir (str) – dir to change to before applying (default ‘.’)
Returns:

instance of some Patch class

exception spack.patch.PatchDirectiveError(message, long_message=None)

Bases: spack.error.SpackError

Raised when the wrong arguments are supplied to the patch directive.

class spack.patch.UrlPatch(path_or_url, level, working_dir, **kwargs)

Bases: spack.patch.Patch

Describes a patch that is retrieved from a URL

apply(stage)

Retrieves the patch in a temporary stage, computes self.path, and calls super().apply(stage).

Parameters:stage – stage for the package that needs to be patched
spack.patch.absolute_path_for_package(pkg)

Returns the absolute path to the package.py file implementing the recipe for the package passed as argument.

Parameters:pkg – a valid package object, or a Dependency object.

spack.provider_index module

The virtual module contains utility classes for virtual dependencies.

class spack.provider_index.ProviderIndex(specs=None, restrict=False)

Bases: object

This is a dict of dicts used for finding providers of particular virtual dependencies. The dict of dicts looks like:

{ vpkg name :
    { full vpkg spec : set(packages providing spec) } }

Callers can use this to first find which packages provide a vpkg, then find a matching full spec. e.g., in this scenario:

{ 'mpi' :
    { mpi@:1.1 : set([mpich]),
      mpi@:2.3 : set([mpich2@1.9:]) } }

Calling providers_for(spec) will find specs that provide a matching implementation of MPI.
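
A hedged usage sketch (the provider and virtual specs shown are placeholders):

from spack.spec import Spec
from spack.provider_index import ProviderIndex

index = ProviderIndex(specs=[Spec('mpich'), Spec('openmpi')])
mpi_providers = index.providers_for('mpi@2:')
# -> specs of the registered packages that provide an MPI matching mpi@2: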

copy()

Deep copy of this ProviderIndex.

static from_yaml(stream)
merge(other)

Merge other ProviderIndex into this one.

providers_for(*vpkg_specs)

Gives specs of all packages that provide virtual packages with the supplied specs.

remove_provider(pkg_name)

Remove a provider from the ProviderIndex.

satisfies(other)

Check that providers of virtual specs are compatible.

to_yaml(stream=None)
update(spec)
exception spack.provider_index.ProviderIndexError(message, long_message=None)

Bases: spack.error.SpackError

Raised when there is a problem with a ProviderIndex.

spack.relocate module

spack.relocate.get_existing_elf_rpaths(path_name)

Return the RPATHs returned by patchelf --print-rpath path_name as a list of strings.

spack.relocate.get_filetype(path_name)

Return the output of file path_name as a string to identify file type.

spack.relocate.get_patchelf()

Builds and installs spack patchelf package on linux platforms using the first concretized spec. Returns the full patchelf binary path.

spack.relocate.get_relative_rpaths(path_name, orig_dir, orig_rpaths)

Replaces orig_dir with a relative path from dirname(path_name) if an rpath in orig_rpaths contains orig_dir. Prefixes $ORIGIN to relative paths and returns the replacement rpaths.

spack.relocate.macho_get_paths(path_name)

Examines the output of otool -l path_name for these three fields: LC_ID_DYLIB, LC_LOAD_DYLIB, LC_RPATH and parses out the rpaths, dependencies and library id. Returns these values.

spack.relocate.macho_make_paths_relative(path_name, old_dir, rpaths, deps, idpath)

Replace old_dir with a relative path from dirname(path_name) in rpaths and deps; idpaths are replaced with @rpath/basename(path_name); replacements are returned.

spack.relocate.macho_replace_paths(old_dir, new_dir, rpaths, deps, idpath)

Replace old_dir with new_dir in rpaths, deps and idpath and return replacements

spack.relocate.make_binary_relative(cur_path_names, orig_path_names, old_dir)

Make RPATHs relative to old_dir in given elf or mach-o files

spack.relocate.modify_elf_object(path_name, orig_rpath, new_rpath)

Replace orig_rpath with new_rpath in RPATH of elf object path_name

spack.relocate.modify_macho_object(cur_path, rpaths, deps, idpath, new_rpaths, new_deps, new_idpath)

Modify MachO binary path_name by replacing old_dir with new_dir or the relative path to the Spack install root:

  • the old install dir in LC_ID_DYLIB is replaced with the new install dir using install_name_tool -id newid binary
  • the old install dir in LC_LOAD_DYLIB is replaced with the new install dir using install_name_tool -change old new binary
  • the old install dir in LC_RPATH is replaced with the new install dir using install_name_tool -rpath old new binary

spack.relocate.needs_binary_relocation(filetype)

Check whether the given filetype is a binary that may need relocation.

spack.relocate.needs_text_relocation(filetype)

Check whether the given filetype is text that may need relocation.

spack.relocate.relocate_binary(path_names, old_dir, new_dir)

Change old_dir to new_dir in RPATHs of elf or mach-o files

spack.relocate.relocate_text(path_names, old_dir, new_dir)

Replace old_dir with new_dir in each text file in path_names

spack.relocate.substitute_rpath(orig_rpath, topdir, new_root_path)

Replace topdir with new_root_path in the RPATH list orig_rpath
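
A rough sketch of how these helpers fit together when relocating an installed prefix; the file and directory paths below are hypothetical:

from spack.relocate import get_filetype, needs_binary_relocation, needs_text_relocation
from spack.relocate import relocate_binary, relocate_text

old_dir = '/old/spack/opt/zlib-1.2.11'    # hypothetical old install prefix
new_dir = '/new/spack/opt/zlib-1.2.11'    # hypothetical new install prefix

binaries, texts = [], []
for path in [new_dir + '/lib/libz.so', new_dir + '/lib/pkgconfig/zlib.pc']:
    ftype = get_filetype(path)
    if needs_binary_relocation(ftype):
        binaries.append(path)
    elif needs_text_relocation(ftype):
        texts.append(path)

relocate_binary(binaries, old_dir, new_dir)   # rewrite RPATHs in ELF/Mach-O files
relocate_text(texts, old_dir, new_dir)        # rewrite hard-coded paths in text files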

spack.repository module

exception spack.repository.BadRepoError(message, long_message=None)

Bases: spack.repository.RepoError

Raised when repo layout is invalid.

exception spack.repository.DuplicateRepoError(message, long_message=None)

Bases: spack.repository.RepoError

Raised when duplicate repos are added to a RepoPath.

exception spack.repository.FailedConstructorError(name, exc_type, exc_obj, exc_tb)

Bases: spack.repository.RepoError

Raised when a package’s class constructor fails.

class spack.repository.FastPackageChecker(packages_path)

Bases: _abcoll.Mapping

Cache that maps package names to the stats obtained on the ‘package.py’ files associated with them.

For each repository a cache is maintained at class level, and shared among all instances referring to it. Update of the global cache is done lazily during instance initialization.

packages_path = None

The path of the repository managed by this instance

exception spack.repository.InvalidNamespaceError(message, long_message=None)

Bases: spack.repository.RepoError

Raised when an invalid namespace is encountered.

exception spack.repository.NoRepoConfiguredError(message, long_message=None)

Bases: spack.repository.RepoError

Raised when there are no repositories configured.

class spack.repository.Repo(root, namespace='spack.pkg')

Bases: object

Class representing a package repository in the filesystem.

Each package repository must have a top-level configuration file called repo.yaml.

Currently, repo.yaml must define:

namespace:
A Python namespace where the repository’s packages should live.
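
As a rough sketch, a Repo can be used directly to look up recipes; the repository path below is hypothetical and zlib is only an example package name:

from spack.repository import Repo

repo = Repo('/path/to/my_repo')        # root directory containing repo.yaml

print(repo.all_package_names()[:10])   # a few of the packages defined in this repo

if repo.exists('zlib'):
    pkg_class = repo.get_pkg_class('zlib')
    print(pkg_class.__name__)          # e.g. 'Zlib', per Spack's class naming convention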
all_package_names()

Returns a sorted list of all package names in the Repo.

all_packages()

Iterator over all packages in the repository.

Use this with care, because loading packages is slow.

dirname_for_package_name(spec_like, *args, **kwargs)
dump_provenance(spec_like, *args, **kwargs)
exists(pkg_name)

Whether a package with the supplied name exists.

extensions_for(spec_like, *args, **kwargs)
filename_for_package_name(spec_like, *args, **kwargs)
find_module(fullname, path=None)

Python find_module import hook.

Returns this Repo if it can load the module; None if not.

get(spec_like, *args, **kwargs)
get_pkg_class(pkg_name)

Get the class for the package out of its module.

First loads (or fetches from cache) a module for the package. Then extracts the package class from the module according to Spack’s naming convention.

is_prefix(fullname)

True if fullname is a prefix of this Repo’s namespace.

is_virtual(pkg_name)

True if the package with this name is virtual, False otherwise.

load_module(fullname)

Python importer load hook.

Tries to load the module; raises an ImportError if it can’t.

packages_with_tags(*tags)
provider_index

A provider index with names specific to this repo.

providers_for(spec_like, *args, **kwargs)
purge()

Clear entire package instance cache.

real_name(import_name)

Allow users to import Spack packages using Python identifiers.

A python identifier might map to many different Spack package names due to hyphen/underscore ambiguity.

Easy example:
num3proxy -> 3proxy
Ambiguous:
foo_bar -> foo_bar, foo-bar
More ambiguous:
foo_bar_baz -> foo_bar_baz, foo-bar-baz, foo_bar-baz, foo-bar_baz
tag_index

An index of tags specific to this repo.

exception spack.repository.RepoError(message, long_message=None)

Bases: spack.error.SpackError

Superclass for repository-related errors.

class spack.repository.RepoPath(*repo_dirs, **kwargs)

Bases: object

A RepoPath is a list of repos that function as one.

It functions exactly like a Repo, but it operates on the combined results of the Repos in its list instead of on a single package repository.
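
A brief sketch, with hypothetical repository paths, of combining repositories so that the first one takes precedence:

from spack.repository import Repo, RepoPath

repos = RepoPath('/path/to/site_repo', '/path/to/builtin_repo')

print(repos.first_repo())              # the repo searched first

# Promote another repo to the front of the search order.
repos.put_first(Repo('/path/to/extra_repo'))

# Resolved against whichever repo provides the package first.
pkg_class = repos.get_pkg_class('zlib')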

all_package_names()

Return all unique package names in all repositories.

all_packages()
dirname_for_package_name(pkg_name)
dump_provenance(spec_like, *args, **kwargs)
exists(pkg_name)

Whether a package with the given name exists in the path’s repos.

Note that virtual packages do not “exist”.

extensions_for(spec_like, *args, **kwargs)
filename_for_package_name(pkg_name)
find_module(fullname, path=None)

Implements precedence for overlaid namespaces.

Loop checks each namespace in self.repos for packages, and also handles loading empty containing namespaces.

first_repo()

Get the first repo in precedence order.

get(spec_like, *args, **kwargs)
get_pkg_class(pkg_name)

Find a class for the spec’s package and return the class object.

get_repo(namespace, default=<object object>)

Get a repository by namespace.

Parameters:namespace – Look up this namespace in the RepoPath, and return it if found.

Optional Arguments:

default:

If default is provided, return it when the namespace isn’t found. If not, raise an UnknownNamespaceError.
is_virtual(pkg_name)

True if the package with this name is virtual, False otherwise.

load_module(fullname)

Handles loading container namespaces when necessary.

See Repo for how actual package modules are loaded.

packages_with_tags(*tags)
provider_index

Merged ProviderIndex from all Repos in the RepoPath.

providers_for(spec_like, *args, **kwargs)
put_first(repo)

Add repo first in the search path.

put_last(repo)

Add repo last in the search path.

remove(repo)

Remove a repo from the search path.

repo_for_pkg(spec)

Given a spec, get the repository for its package.

swap(other)

Convenience function to make swapping repositories easier.

This is currently used by mock tests. TODO: Maybe there is a cleaner way.

class spack.repository.SpackNamespace(namespace)

Bases: module

Allow lazy loading of modules.

class spack.repository.TagIndex

Bases: _abcoll.Mapping

Maps tags to list of packages.

static from_json(stream)
to_json(stream)
update_package(pkg_name)

Updates a package in the tag index.

Parameters:pkg_name (str) – name of the package to be removed from the index
exception spack.repository.UnknownEntityError(message, long_message=None)

Bases: spack.repository.RepoError

Raised when we encounter a package spack doesn’t have.

exception spack.repository.UnknownNamespaceError(namespace)

Bases: spack.repository.UnknownEntityError

Raised when we encounter an unknown namespace

exception spack.repository.UnknownPackageError(name, repo=None)

Bases: spack.repository.UnknownEntityError

Raised when we encounter a package spack doesn’t have.

spack.repository.create_repo(root, namespace=None)

Create a new repository in root with the specified namespace.

If the namespace is not provided, use basename of root. Return the canonicalized path and namespace of the created repository.

spack.resource module

Describes an optional resource needed for a build.

Typically a bunch of sources that can be built in-tree within another package to enable optional features.

class spack.resource.Resource(name, fetcher, destination, placement)

Bases: object

Represents an optional resource to be fetched by a package.

Aggregates a name, a fetcher, a destination and a placement.

spack.spec module

Spack allows very fine-grained control over how packages are installed and over how they are built and configured. To make this easy, it has its own syntax for declaring a dependence. We call a descriptor of a particular package configuration a “spec”.

The syntax looks like this:

$ spack install mpileaks ^openmpi @1.2:1.4 +debug %intel @12.1 =bgqos_0
                1        2        3        4      5      6     7

The first part of this is the command, ‘spack install’. The rest of the line is a spec for a particular installation of the mpileaks package.

  1. The package to install

  2. A dependency of the package, prefixed by ^

  3. A version descriptor for the package. This can either be a specific version, like “1.2”, or it can be a range of versions, e.g. “1.2:1.4”. If multiple specific versions or multiple ranges are acceptable, they can be separated by commas, e.g. if a package will only build with versions 1.0, 1.2-1.4, and 1.6-1.8 of mvapich, you could say:

    depends_on(“mvapich@1.0,1.2:1.4,1.6:1.8”)

  4. A compile-time variant of the package. If you need openmpi to be built in debug mode for your package to work, you can require it by adding +debug to the openmpi spec when you depend on it. If you do NOT want the debug option to be enabled, then replace this with -debug.

  5. The name of the compiler to build with.

  6. The versions of the compiler to build with. Note that the identifier for a compiler version is the same ‘@’ that is used for a package version. A version list denoted by ‘@’ is associated with the compiler only if it comes immediately after the compiler name. Otherwise it will be associated with the current package spec.

  7. The architecture to build with. This is needed on machines where cross-compilation is required

Here is the EBNF grammar for a spec:

spec-list    = { spec [ dep-list ] }
dep_list     = { ^ spec }
spec         = id [ options ]
options      = { @version-list | +variant | -variant | ~variant |
                 %compiler | arch=architecture | [ flag ]=value}
flag         = { cflags | cxxflags | fcflags | fflags | cppflags |
                 ldflags | ldlibs }
variant      = id
architecture = id
compiler     = id [ version-list ]
version-list = version [ { , version } ]
version      = id | id: | :id | id:id
id           = [A-Za-z0-9_][A-Za-z0-9_.-]*

Identifiers using the <name>=<value> syntax, such as architectures and compiler flags, require a space before the name.

There is one context-sensitive part: ids in versions may contain ‘.’, while other ids may not.

There is one ambiguity: since ‘-‘ is allowed in an id, you need to put a space before -variant for it to be tokenized properly. You can either use whitespace, or you can just use ~variant since it means the same thing. Spack uses ~variant in directory names and in the canonical form of specs to avoid ambiguity. Both are provided because ~ can cause shell expansion when it is the first character in an id typed on the command line.
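
A small sketch of constructing and inspecting Specs from this syntax; the package and compiler names are only illustrative:

from spack.spec import Spec
import spack.spec

s = Spec('mpileaks @2.3 +debug %gcc@7.2.0 ^openmpi @1.10:')

print(s.name)                # 'mpileaks'
print(s.format('$_$@$+'))    # name, version and variants (see format() below)

# parse() returns a list of Specs from a command-line style string.
for spec in spack.spec.parse('libelf@0.8.13 libdwarf'):
    print(spec)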

class spack.spec.Spec(spec_like, **kwargs)

Bases: object

cformat(*args, **kwargs)

Same as format, but color defaults to auto instead of False.

colorized()
common_dependencies(other)

Return names of dependencies that self and other have in common.

concrete

A spec is concrete if it describes a single build of a package.

More formally, a spec is concrete if concretize() has been called on it and it has been marked _concrete.

Concrete specs either can be or have been built. All constraints have been resolved, optional dependencies have been added or removed, a compiler has been chosen, and all variants have values.

concretize()

A spec is concrete if it describes one build of a package uniquely. This will ensure that this spec is concrete.

If this spec could describe more than one version, variant, or build of a package, this will add constraints to make it concrete.

Some rigorous validation and checks are also performed on the spec. Concretizing ensures that it is self-consistent and that it’s consistent with requirements of its packages. See flatten() and normalize() for more details on this.

It also ensures that:

for x in self.traverse():
    assert x.package.spec == x

which may not be true during the concretization step.

concretized()

This is a non-destructive version of concretize(). First clones, then returns a concrete version of this package without modifying this package.

constrain(other, deps=True)

Merge the constraints of other with self.

Returns True if the spec changed as a result, False if not.

constrained(other, deps=True)

Return a constrained copy without modifying this spec.

copy(deps=True, **kwargs)

Make a copy of this spec.

Parameters:
  • deps (bool or tuple) – Defaults to True. If boolean, controls whether dependencies are copied (copied if True). If a tuple is provided, only dependencies of types matching those in the tuple are copied.
  • kwargs – additional arguments for internal use (passed to _dup).
Returns:

A copy of this spec.

Examples

Deep copy with dependencies:

spec.copy()
spec.copy(deps=True)

Shallow copy (no dependencies):

spec.copy(deps=False)

Only build and run dependencies:

spec.copy(deps=('build', 'run'))
cshort_spec

Returns an auto-colorized version of self.short_spec.

dag_hash(length=None)

Return a hash of the entire spec DAG, including connectivity.

dag_hash_bit_prefix(bits)

Get the first <bits> bits of the DAG hash as an integer type.

dep_difference(other)

Returns dependencies in self that are not in other.

dep_string()
dependencies(deptype='all')
dependencies_dict(deptype='all')
dependents(deptype='all')
dependents_dict(deptype='all')
eq_dag(other, deptypes=True)

True if the full dependency DAGs of specs are equal.

eq_node(other)

Equality with another spec, not including dependencies.

external
flat_dependencies(**kwargs)

Return a DependencyMap containing all of this spec’s dependencies with their constraints merged.

If copy is True, returns merged copies of its dependencies without modifying the spec it’s called on.

If copy is False, clears this spec’s dependencies and returns them.

format(format_string='$_$@$%@+$+$=', **kwargs)

Prints out particular pieces of a spec, depending on what is in the format string.

The format strings you can provide are:

$_   Package name
$.   Full package name (with namespace)
$@   Version with '@' prefix
$%   Compiler with '%' prefix
$%@  Compiler with '%' prefix & compiler version with '@' prefix
$%+  Compiler with '%' prefix & compiler flags prefixed by name
$%@+ Compiler, compiler version, and compiler flags with same
     prefixes as above
$+   Options
$=   Architecture prefixed by 'arch='
$/   7-char prefix of DAG hash with '-' prefix
$$   $

You can also use full-string versions, which elide the prefixes:

${PACKAGE}       Package name
${VERSION}       Version
${COMPILER}      Full compiler string
${COMPILERNAME}  Compiler name
${COMPILERVER}   Compiler version
${COMPILERFLAGS} Compiler flags
${OPTIONS}       Options
${ARCHITECTURE}  Architecture
${SHA1}          Dependencies 8-char sha1 prefix
${HASH:len}      DAG hash with optional length specifier

${SPACK_ROOT}    The spack root directory
${SPACK_INSTALL} The default spack install directory,
                 ${SPACK_PREFIX}/opt
${PREFIX}        The package prefix

Note these are case-insensitive: for example you can specify either ${PACKAGE} or ${package}.

Optionally you can provide a width, e.g. $20_ for a 20-wide name. Like printf, you can provide ‘-‘ for left justification, e.g. $-20_ for a left-justified name.

Anything else is copied verbatim into the output stream.

Parameters:
  • format_string (str) – string containing the format to be expanded
  • **kwargs (dict) –

    the following list of keywords is supported

    • color (bool): True if returned string is colored
    • transform (dict): maps full-string formats to a callable that accepts a string and returns another one

Examples

The following line:

s = spec.format('$_$@$+')

translates to the name, version, and options of the package, but no dependencies, arch, or compiler.

TODO: allow, e.g., $6# to customize short hash length TODO: allow, e.g., $// for full hash.

static from_dict(data)

Construct a spec from YAML.

Parameters: data – a nested dict/list data structure read from YAML or JSON.

static from_json(stream)

Construct a spec from JSON.

Parameters: stream – string or file object to read from.

static from_literal(spec_dict, normal=True)

Builds a Spec from a dictionary containing the spec literal.

The dictionary must have a single top level key, representing the root, and as many secondary level keys as needed in the spec.

The keys can be either a string or a Spec or a tuple containing the Spec and the dependency types.

Parameters:
  • spec_dict (dict) – the dictionary containing the spec literal
  • normal (bool) – if True the same key appearing at different levels of the spec_dict will map to the same object in memory.

Examples

A simple spec foo with no dependencies:

{'foo': None}

A spec foo with a (build, link) dependency bar:

{'foo':
    {'bar:build,link': None}}

A spec with a diamond dependency and various build types:

{'dt-diamond': {
    'dt-diamond-left:build,link': {
        'dt-diamond-bottom:build': None
    },
    'dt-diamond-right:build,link': {
        'dt-diamond-bottom:build,link,run': None
    }
}}

The same spec with a double copy of dt-diamond-bottom and no diamond structure:

{'dt-diamond': {
    'dt-diamond-left:build,link': {
        'dt-diamond-bottom:build': None
    },
    'dt-diamond-right:build,link': {
        'dt-diamond-bottom:build,link,run': None
    }
}, normal=False}

Constructing a spec using a Spec object as key:

mpich = Spec('mpich')
libelf = Spec('libelf@1.8.11')
expected_normalized = Spec.from_literal({
    'mpileaks': {
        'callpath': {
            'dyninst': {
                'libdwarf': {libelf: None},
                libelf: None
            },
            mpich: None
        },
        mpich: None
    },
})
static from_node_dict(node)
static from_yaml(stream)

Construct a spec from YAML.

Parameters: stream – string or file object to read from.

fullname
get_dependency(name)
index(deptype='all')

Return DependencyMap that points to all the dependencies in this spec.

static is_virtual(name)

Test if a name is virtual without requiring a Spec.

ne_dag(other, deptypes=True)

True if the full dependency DAGs of specs are not equal.

ne_node(other)

Inequality with another spec, not including dependencies.

normalize(force=False)

When specs are parsed, any dependencies specified are hanging off the root, and ONLY the ones that were explicitly provided are there. Normalization turns a partial flat spec into a DAG, where:

  1. Known dependencies of the root package are in the DAG.
  2. Each node’s dependencies dict only contains its known direct deps.
  3. There is only ONE unique spec for each package in the DAG.
    • This includes virtual packages. If there is a non-virtual package that provides a virtual package that is in the spec, then we replace the virtual package with the non-virtual one.

TODO: normalize should probably implement some form of cycle detection, to ensure that the spec is actually a DAG.

normalized()

Return a normalized copy of this spec without modifying this spec.

package
package_class

Internal package call gets only the class object for a package. Use this to just get package metadata.

patches

Return patch objects for any patch sha256 sums on this Spec.

This is for use after concretization to iterate over any patches associated with this spec.

TODO: this only checks in the package; it doesn’t resurrect old patches from install directories, but it probably should.

prefix
static read_yaml_dep_specs(dependency_dict)

Read the DependencySpec portion of a YAML-formatted Spec.

This needs to be backward-compatible with older spack spec formats so that reindex will work on old specs/databases.

root

Follow dependent links and find the root of this spec’s DAG.

Spack specs have a single root (the package being installed).

satisfies(other, deps=True, strict=False, strict_deps=False)

Determine if this spec satisfies all constraints of another.

There are two senses for satisfies:

  • loose (default): the absence of a constraint in self implies that it could be satisfied by other, so we only check that there are no conflicts with other for constraints that this spec actually has.
  • strict: strict means that we must meet all the constraints specified on other.
satisfies_dependencies(other, strict=False)

This checks constraints on common dependencies against each other.

short_spec

Returns a version of the spec with the dependencies hashed instead of completely enumerated.

sorted_deps()

Return a list of all dependencies sorted by name.

to_dict()
to_json(stream=None)
to_node_dict()
to_yaml(stream=None)
traverse(**kwargs)
traverse_edges(visited=None, d=0, deptype='all', dep_spec=None, **kwargs)

Generic traversal of the DAG represented by this spec. This will yield each node in the spec. Options:

order [=pre|post]
  Order to traverse spec nodes. Defaults to preorder traversal. Options are:
    ‘pre’:  Pre-order traversal; each node is yielded before its children in the dependency DAG.
    ‘post’: Post-order traversal; each node is yielded after its children in the dependency DAG.

cover [=nodes|edges|paths]
  Determines how extensively to cover the DAG. Possible values:
    ‘nodes’: Visit each node in the DAG only once. Every node yielded by this function will be unique.
    ‘edges’: If a node has been visited once but is reached along a new path from the root, yield it but do not descend into it. This traverses each ‘edge’ in the DAG once.
    ‘paths’: Explore every unique path reachable from the root. This descends into visited subtrees and will yield nodes twice if they’re reachable by multiple paths.

depth [=False]
  Defaults to False. When True, yields not just nodes in the spec, but also their depth from the root in a (depth, node) tuple.

key [=id]
  Allow a custom key function to track the identity of nodes in the traversal.

root [=True]
  If False, this won’t yield the root node, just its descendants.

direction [=children|parents]
  If ‘children’, does a traversal of this spec’s children. If ‘parents’, traverses upwards in the DAG towards the root.
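
A minimal sketch of these options, assuming traverse() accepts the same keyword arguments as traverse_edges() and that an mpileaks package is available to concretize:

from spack.spec import Spec

spec = Spec('mpileaks').concretized()     # non-destructive concretization (see above)

# Post-order walk over unique nodes, yielding (depth, node) tuples.
for depth, node in spec.traverse(order='post', cover='nodes', depth=True):
    print('  ' * depth + node.name)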
tree(**kwargs)

Prints out this spec and its dependencies, tree-formatted with indentation.

validate_or_raise()

Checks that names and values in this spec are real. If they’re not, it will raise an appropriate exception.

version
virtual

Right now, a spec is virtual if no package exists with its name.

TODO: revisit this – might need to use a separate namespace and be more explicit about this. Possible idea: just use convention and make virtual deps all caps, e.g., MPI vs mpi.

virtual_dependencies()

Return list of any virtual deps in this spec.

spack.spec.parse(string)

Returns a list of specs from an input string. For creating one spec, see Spec() constructor.

spack.spec.parse_anonymous_spec(spec_like, pkg_name)

Allow the user to omit the package name part of a spec if they know what it has to be already.

e.g., provides(‘mpi@2’, when=’@1.9:’) says that this package provides MPI-2 when its version is higher than 1.9.

exception spack.spec.SpecError(message, long_message=None)

Bases: spack.error.SpackError

Superclass for all errors that occur while constructing specs.

exception spack.spec.SpecParseError(parse_error)

Bases: spack.error.SpecError

Wrapper for ParseError for when we’re parsing specs.

exception spack.spec.DuplicateDependencyError(message, long_message=None)

Bases: spack.error.SpecError

Raised when the same dependency occurs in a spec twice.

exception spack.spec.DuplicateVariantError(message, long_message=None)

Bases: spack.error.SpecError

Raised when the same variant occurs in a spec twice.

exception spack.spec.DuplicateCompilerSpecError(message, long_message=None)

Bases: spack.error.SpecError

Raised when the same compiler occurs in a spec twice.

exception spack.spec.UnsupportedCompilerError(compiler_name)

Bases: spack.error.SpecError

Raised when the user asks for a compiler spack doesn’t know about.

exception spack.spec.UnknownVariantError(pkg, variant)

Bases: spack.error.SpecError

Raised when an unknown variant occurs in a spec.

exception spack.spec.DuplicateArchitectureError(message, long_message=None)

Bases: spack.error.SpecError

Raised when the same architecture occurs in a spec twice.

exception spack.spec.InconsistentSpecError(message, long_message=None)

Bases: spack.error.SpecError

Raised when two nodes in the same spec DAG have inconsistent constraints.

exception spack.spec.InvalidDependencyError(message, long_message=None)

Bases: spack.error.SpecError

Raised when a dependency in a spec is not actually a dependency of the package.

exception spack.spec.NoProviderError(vpkg)

Bases: spack.error.SpecError

Raised when there is no package that provides a particular virtual dependency.

exception spack.spec.MultipleProviderError(vpkg, providers)

Bases: spack.error.SpecError

Raised when multiple packages provide the same virtual dependency.

exception spack.spec.UnsatisfiableSpecError(provided, required, constraint_type)

Bases: spack.error.SpecError

Raised when a spec conflicts with package constraints. Provide the requirement that was violated when raising.

exception spack.spec.UnsatisfiableSpecNameError(provided, required)

Bases: spack.error.UnsatisfiableSpecError

Raised when two specs aren’t even for the same package.

exception spack.spec.UnsatisfiableVersionSpecError(provided, required)

Bases: spack.error.UnsatisfiableSpecError

Raised when a spec version conflicts with package constraints.

exception spack.spec.UnsatisfiableCompilerSpecError(provided, required)

Bases: spack.error.UnsatisfiableSpecError

Raised when a spec compiler conflicts with package constraints.

exception spack.spec.UnsatisfiableVariantSpecError(provided, required)

Bases: spack.error.UnsatisfiableSpecError

Raised when a spec variant conflicts with package constraints.

exception spack.spec.UnsatisfiableCompilerFlagSpecError(provided, required)

Bases: spack.error.UnsatisfiableSpecError

Raised when a spec compiler flag conflicts with package constraints.

exception spack.spec.UnsatisfiableArchitectureSpecError(provided, required)

Bases: spack.error.UnsatisfiableSpecError

Raised when a spec architecture conflicts with package constraints.

exception spack.spec.UnsatisfiableProviderSpecError(provided, required)

Bases: spack.error.UnsatisfiableSpecError

Raised when a provider is supplied but constraints don’t match a vpkg requirement

exception spack.spec.UnsatisfiableDependencySpecError(provided, required)

Bases: spack.error.UnsatisfiableSpecError

Raised when some dependencies of constrained specs are incompatible

exception spack.spec.AmbiguousHashError(msg, *specs)

Bases: spack.error.SpecError

exception spack.spec.InvalidHashError(spec, hash)

Bases: spack.error.SpecError

exception spack.spec.NoSuchHashError(hash)

Bases: spack.error.SpecError

exception spack.spec.RedundantSpecError(spec, addition)

Bases: spack.error.SpecError

spack.stage module

class spack.stage.DIYStage(path)

Bases: object

Simple class that allows any directory to be a spack stage.

cache_local()
check()
create()
destroy()
expand_archive()
fetch(*args, **kwargs)
restage()
class spack.stage.ResourceStage(url_or_fetch_strategy, root, resource, **kwargs)

Bases: spack.stage.Stage

expand_archive()
restage()
exception spack.stage.RestageError(message, long_message=None)

Bases: spack.stage.StageError

Error encountered during restaging.

class spack.stage.Stage(url_or_fetch_strategy, name=None, mirror_path=None, keep=False, path=None, lock=True, search_fn=None)

Bases: object

Manages a temporary stage directory for building.

A Stage object is a context manager that handles a directory where some source code is downloaded and built before being installed. It handles fetching the source code, either as an archive to be expanded or by checking it out of a repository. A stage’s lifecycle looks like this:

with Stage() as stage:      # Context manager creates and destroys the
                            # stage directory
    stage.fetch()           # Fetch a source archive into the stage.
    stage.expand_archive()  # Expand the source archive.
    <install>               # Build and install the archive.
                            # (handled by user of Stage)

When used as a context manager, the stage is automatically destroyed if no exception is raised by the context. If an exception is raised, the stage is left in the filesystem and NOT destroyed, for potential reuse later.

You can also use the stage’s create/destroy functions manually, like this:

stage = Stage()
try:
    stage.create()          # Explicitly create the stage directory.
    stage.fetch()           # Fetch a source archive into the stage.
    stage.expand_archive()  # Expand the source archive.
    <install>               # Build and install the archive.
                            # (handled by user of Stage)
finally:
    stage.destroy()         # Explicitly destroy the stage directory.

If spack.use_tmp_stage is True, spack will attempt to create stages in a tmp directory. Otherwise, stages are created directly in spack.stage_path.

There are two kinds of stages: named and unnamed. Named stages can persist between runs of spack, e.g. if you fetched a tarball but didn’t finish building it, you won’t have to fetch it again.

Unnamed stages are created using standard mkdtemp mechanisms or similar, and are intended to persist for only one run of spack.

archive_file

Path to the source archive within this stage directory.

cache_local()
check()

Check the downloaded archive against a checksum digest. No-op if this stage checks code out of a repository.

create()

Creates the stage directory.

If get_tmp_root() is None, the stage directory is created directly under spack.stage_path, otherwise this will attempt to create a stage in a temporary directory and link it into spack.stage_path.

Spack will use the first writable location in spack.tmp_dirs to create a stage. If there is no valid location in tmp_dirs, fall back to making the stage inside spack.stage_path.

destroy()

Removes this stage directory.

expand_archive()

Changes to the stage directory and attempts to expand the downloaded archive. Fails if the stage is not set up or if the archive is not yet downloaded.

expected_archive_files

Possible archive file paths.

fetch(mirror_only=False)

Downloads an archive or checks out code from a repository.

restage()

Removes the expanded archive path if it exists, then re-expands the archive.

save_filename
source_path

Returns the path to the expanded/checked out source code.

To find the source code, this method searches for the first subdirectory of the stage that it can find, and returns it. This assumes nothing besides the archive file will be in the stage path, but it has the advantage that we don’t need to know the name of the archive or its contents.

If the fetch strategy is not supposed to expand the downloaded file, it will just return the stage path. If the archive needs to be expanded, it will return None when no archive is found.

stage_locks = {}
exception spack.stage.StageError(message, long_message=None)

Bases: spack.error.SpackError

Superclass for all errors encountered during staging.

spack.stage.ensure_access(file='/home/docs/checkouts/readthedocs.org/user_builds/spack/checkouts/v0.11.0/var/spack/stage')

Ensure we can access a directory and die with an error if we can’t.

spack.stage.get_tmp_root()
spack.stage.purge()

Remove all build directories in the top-level stage path.

spack.store module

Components that manage Spack’s installation tree.

An install tree, or “build store” consists of two parts:

  1. A package database that tracks what is installed.
  2. A directory layout that determines how the installations are laid out.

The store contains all the install prefixes for packages installed by Spack. The simplest store could just contain prefixes named by DAG hash, but we use a fancier directory layout to make browsing the store and debugging easier.

The directory layout is currently hard-coded to be a YAMLDirectoryLayout, so called because it stores build metadata within each prefix, in spec.yaml files. In future versions of Spack we may consider allowing install trees to define their own layouts with some per-tree configuration.

spack.tengine module

class spack.tengine.Context

Bases: object

Base class for context classes that are used with the template engine.

context_properties = []
to_dict()

Returns a dictionary containing all the context properties.

class spack.tengine.ContextMeta

Bases: type

Metaclass for Context. It helps reduce the boilerplate in client code.

classmethod context_property(mcs, func)

Decorator that adds a function name to the list of new context properties, and then returns a property.

spack.tengine.context_property = <bound method type.context_property of <class 'spack.tengine.ContextMeta'>>

A saner way to use the decorator
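
A small sketch of a Context subclass; the class, property names and values are made up:

import spack.tengine as tengine

class PackageContext(tengine.Context):
    """Hypothetical context exposing a property to a template."""

    def __init__(self, name, version):
        self.name = name
        self.version = version

    @tengine.context_property
    def title(self):
        # Collected by ContextMeta and exposed through to_dict()
        return '{0}@{1}'.format(self.name, self.version)

context = PackageContext('zlib', '1.2.11')
print(context.to_dict())    # expected to contain {'title': 'zlib@1.2.11'}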

spack.tengine.make_environment(dirs=None)

Returns a configured environment for template rendering.

spack.tengine.prepend_to_line(text, token)

Prepends a token to each line in text

spack.tengine.quote(text)

Quotes each line in text

spack.url module

This module has methods for parsing names and versions of packages from URLs. The idea is to allow package creators to supply nothing more than the download location of the package, and figure out version and name information from there.

Example: when spack is given the following URL:

It can figure out that the package name is hdf, and that it is at version 4.2.12. This is useful for making the creation of packages simple: a user just supplies a URL and skeleton code is generated automatically.

Spack can also figure out that it can most likely download 4.2.6 at this URL:

This is useful if a user asks for a package at a particular version number; spack doesn’t need anyone to tell it where to get the tarball even though it’s never been told about that version before.
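
A short sketch of the main entry points in this module; the URL below is made up:

from spack.url import parse_name_and_version, substitute_version, wildcard_version

url = 'https://example.com/downloads/hdf-4.2.12.tar.gz'    # hypothetical URL

name, version = parse_name_and_version(url)
print(name, version)                      # expected: hdf 4.2.12

# Guess where another version of the same package might live.
print(substitute_version(url, '4.2.6'))   # ...downloads/hdf-4.2.6.tar.gz

# Regular expression matching this URL with any version in place of 4.2.12.
print(wildcard_version(url))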

exception spack.url.UndetectableNameError(path)

Bases: spack.url.UrlParseError

Raised when we can’t parse a package name from a string.

exception spack.url.UndetectableVersionError(path)

Bases: spack.url.UrlParseError

Raised when we can’t parse a version from a string.

exception spack.url.UrlParseError(msg, path)

Bases: spack.error.SpackError

Raised when the URL module can’t parse something correctly.

spack.url.color_url(path, **kwargs)

Color the parts of the url according to Spack’s parsing.

Colors are:
Cyan: The version found by parse_version_offset().
Red: The name found by parse_name_offset().
Green: Instances of version string from substitute_version().
Magenta: Instances of the name (protected from substitution).
Parameters:
  • path (str) – The filename or URL for the package
  • errors (bool) – Append parse errors at end of string.
  • subs (bool) – Color substitutions as well as parsed name/version.
spack.url.cumsum(elts, init=0, fn=<function <lambda>>)

Return cumulative sum of result of fn on each element in elts.

spack.url.determine_url_file_extension(path)

This returns the type of archive a URL refers to. This is sometimes confusing because of URLs like:

  1. https://github.com/petdance/ack/tarball/1.93_02

Where the URL doesn’t actually contain the filename. We need to know what type it is so that we can appropriately name files in mirrors.

spack.url.find_all(substring, string)

Returns a list containing the indices of every occurrence of substring in string.

spack.url.find_list_url(url)

Finds a good list URL for the supplied URL.

By default, returns the dirname of the archive path.

Provides special treatment for the following websites, which have a unique list URL different from the dirname of the download URL:

GitHub https://github.com/<repo>/<name>/releases
GitLab https://gitlab.*/<repo>/<name>/tags
BitBucket https://bitbucket.org/<repo>/<name>/downloads/?tab=tags
CRAN https://*.r-project.org/src/contrib/Archive/<name>
Parameters:url (str) – The download URL for the package
Returns:The list URL for the package
Return type:str
spack.url.insensitize(string)

Change upper and lowercase letters to be case insensitive in the provided string. e.g., ‘a’ becomes ‘[Aa]’, ‘B’ becomes ‘[bB]’, etc. Use for building regexes.

spack.url.parse_name(path, ver=None)

Try to determine the name of a package from its filename or URL.

Parameters:
  • path (str) – The filename or URL for the package
  • ver (str) – The version of the package
Returns:

The name of the package

Return type:

str

Raises:

UndetectableNameError – If the URL does not match any regexes

spack.url.parse_name_and_version(path)

Try to determine the name of a package and extract its version from its filename or URL.

Parameters:

path (str) – The filename or URL for the package

Returns:

A tuple containing the name of the package and the version of the package

Return type:

tuple of (str, Version)

Raises:

UndetectableNameError – If the URL does not match any regexes
UndetectableVersionError – If the URL does not match any regexes
spack.url.parse_name_offset(path, v=None)

Try to determine the name of a package from its filename or URL.

Parameters:
  • path (str) – The filename or URL for the package
  • v (str) – The version of the package
Returns:

A tuple containing:

name of the package, first index of name, length of name, the index of the matching regex, and the matching regex

Return type:

tuple of (str, int, int, int, str)

Raises:

UndetectableNameError – If the URL does not match any regexes

spack.url.parse_version(path)

Try to extract a version string from a filename or URL.

Parameters:path (str) – The filename or URL for the package
Returns:The version of the package
Return type:spack.version.Version
Raises:UndetectableVersionError – If the URL does not match any regexes
spack.url.parse_version_offset(path)

Try to extract a version string from a filename or URL.

Parameters:path (str) – The filename or URL for the package
Returns:
A tuple containing:
version of the package, first index of version, length of version string, the index of the matching regex, and the matching regex
Return type:tuple of (Version, int, int, int, str)
Raises:UndetectableVersionError – If the URL does not match any regexes
spack.url.split_url_extension(path)

Some URLs have a query string, e.g.:

  1. https://github.com/losalamos/CLAMR/blob/packages/PowerParser_v2.0.7.tgz?raw=true
  2. http://www.apache.org/dyn/closer.cgi?path=/cassandra/1.2.0/apache-cassandra-1.2.0-rc2-bin.tar.gz
  3. https://gitlab.kitware.com/vtk/vtk/repository/archive.tar.bz2?ref=v7.0.0

In (1), the query string needs to be stripped to get at the extension, but in (2) & (3), the filename is IN a single final query argument.

This strips the URL into three pieces: prefix, ext, and suffix. The suffix contains anything that was stripped off the URL to get at the file extension. In (1), it will be '?raw=true', but in (2), it will be empty. In (3) the suffix is a parameter that follows after the file extension, e.g.:

  1. ('https://github.com/losalamos/CLAMR/blob/packages/PowerParser_v2.0.7', '.tgz', '?raw=true')
  2. ('http://www.apache.org/dyn/closer.cgi?path=/cassandra/1.2.0/apache-cassandra-1.2.0-rc2-bin', '.tar.gz', None)
  3. ('https://gitlab.kitware.com/vtk/vtk/repository/archive', '.tar.bz2', '?ref=v7.0.0')
spack.url.strip_name_suffixes(path, version)

Most tarballs contain a package name followed by a version number. However, some also contain extraneous information in-between the name and version:

  • rgb-1.0.6
  • converge_install_2.3.16
  • jpegsrc.v9b

These strings are not part of the package name and should be ignored. This function strips the version number and any extraneous suffixes off and returns the remaining string. The goal is that the name is always the last thing in path:

  • rgb
  • converge
  • jpeg
Parameters:
  • path (str) – The filename or URL for the package
  • version (str) – The version detected for this URL
Returns:

The path with any extraneous suffixes removed

Return type:

str

spack.url.strip_query_and_fragment(path)
spack.url.strip_version_suffixes(path)

Some tarballs contain extraneous information after the version:

  • bowtie2-2.2.5-source
  • libevent-2.0.21-stable
  • cuda_8.0.44_linux.run

These strings are not part of the version number and should be ignored. This function strips those suffixes off and returns the remaining string. The goal is that the version is always the last thing in path:

  • bowtie2-2.2.5
  • libevent-2.0.21
  • cuda_8.0.44
Parameters:path (str) – The filename or URL for the package
Returns:The path with any extraneous suffixes removed
Return type:str
spack.url.substitute_version(path, new_version)

Given a URL or archive name, find the version in the path and substitute the new version for it. Replace all occurrences of the version if they don’t overlap with the package name.

Simple example:

substitute_version('http://www.mr511.de/software/libelf-0.8.13.tar.gz', '2.9.3')
>>> 'http://www.mr511.de/software/libelf-2.9.3.tar.gz'

Complex example:

substitute_version('https://www.hdfgroup.org/ftp/HDF/releases/HDF4.2.12/src/hdf-4.2.12.tar.gz', '2.3')
>>> 'https://www.hdfgroup.org/ftp/HDF/releases/HDF2.3/src/hdf-2.3.tar.gz'
spack.url.substitution_offsets(path)

This returns offsets for substituting versions and names in the provided path. It is a helper for substitute_version().

spack.url.wildcard_version(path)

Find the version in the supplied path, and return a regular expression that will match this path with any version in its place.

spack.variant module

The variant module contains data structures that are needed to manage variants both in packages and in specs.

class spack.variant.AbstractVariant(name, value)

Bases: object

A variant that has not yet decided who it wants to be. It behaves like a multi valued variant which could do things.

This kind of variant is generated during parsing of expressions like foo=bar and differs from multi valued variants because it will satisfy any other variant with the same name. This is because it could do it if it grows up to be a multi valued variant with the right set of values.

compatible(other)

Returns True if self and other are compatible, False otherwise.

As there is no semantic check, two VariantSpec are compatible if either they contain the same value or they are both multi-valued.

Parameters:other – instance against which we test compatibility
Returns:True or False
Return type:bool
constrain(other)

Modify self to match all the constraints for other if both instances are multi-valued. Returns True if self changed, False otherwise.

Parameters:other – instance against which we constrain self
Returns:True or False
Return type:bool
copy()

Returns an instance of a variant equivalent to self

Returns:a copy of self
Return type:any variant type
>>> a = MultiValuedVariant('foo', True)
>>> b = a.copy()
>>> assert a == b
>>> assert a is not b
static from_node_dict(name, value)

Reconstruct a variant from a node dict.

satisfies(other)

Returns true if other.name == self.name, because any value that other holds and is not in self yet could be added.

Parameters:other – constraint to be met for the method to return True
Returns:True or False
Return type:bool
value

Returns a tuple of strings containing the values stored in the variant.

Returns:values stored in the variant
Return type:tuple of str
yaml_entry()

Returns a key, value tuple suitable to be an entry in a yaml dict.

Returns:(name, value_representation)
Return type:tuple
class spack.variant.BoolValuedVariant(name, value)

Bases: spack.variant.SingleValuedVariant

A variant that can hold either True or False.

exception spack.variant.DuplicateVariantError(message, long_message=None)

Bases: spack.error.SpecError

Raised when the same variant occurs in a spec twice.

exception spack.variant.InconsistentValidationError(vspec, variant)

Bases: spack.error.SpecError

Raised if the wrong validator is used to validate a variant.

exception spack.variant.InvalidVariantValueError(variant, invalid_values, pkg)

Bases: spack.error.SpecError

Raised when a valid variant has at least an invalid value.

class spack.variant.MultiValuedVariant(name, value)

Bases: spack.variant.AbstractVariant

A variant that can hold multiple values at once.

satisfies(other)

Returns true if other.name == self.name and other.value is a strict subset of self. Does not try to validate.

Parameters:other – constraint to be met for the method to return True
Returns:True or False
Return type:bool
exception spack.variant.MultipleValuesInExclusiveVariantError(variant, pkg)

Bases: spack.error.SpecError, exceptions.ValueError

Raised when multiple values are present in a variant that wants only one.

class spack.variant.SingleValuedVariant(name, value)

Bases: spack.variant.MultiValuedVariant

A variant that can hold multiple values, but one at a time.

compatible(other)
constrain(other)
satisfies(other)
yaml_entry()
exception spack.variant.UnknownVariantError(pkg, variant)

Bases: spack.error.SpecError

Raised when an unknown variant occurs in a spec.

exception spack.variant.UnsatisfiableVariantSpecError(provided, required)

Bases: spack.error.UnsatisfiableSpecError

Raised when a spec variant conflicts with package constraints.

class spack.variant.Variant(name, default, description, values=(True, False), multi=False, validator=None)

Bases: object

Represents a variant in a package, as declared in the variant directive.
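
As a rough sketch, the variant directive in a package recipe creates Variant objects like this one; the package below is hypothetical and the directive is assumed to mirror the constructor arguments above:

from spack import *    # package files conventionally get Package and the directives this way

class Libqux(Package):
    """Hypothetical package illustrating boolean and multi-valued variants."""

    homepage = 'https://example.com/libqux'
    url = 'https://example.com/libqux-1.0.tar.gz'

    variant('shared', default=True, description='Build shared libraries')
    variant('fabrics', default='psm', values=('psm', 'verbs', 'tcp'),
            multi=True, description='Fabrics to enable')

    def install(self, spec, prefix):
        args = []
        if '+shared' in spec:
            args.append('--enable-shared')
        # A multi-valued variant's value is a tuple of strings (see AbstractVariant.value).
        for fabric in spec.variants['fabrics'].value:
            args.append('--with-{0}'.format(fabric))
        # ... build and install into prefix here ...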

allowed_values

Returns a string representation of the allowed values for printing purposes

Returns:representation of the allowed values
Return type:str
make_default()

Factory that creates a variant holding the default value.

Returns:instance of the proper variant
Return type:MultiValuedVariant or SingleValuedVariant or BoolValuedVariant
make_variant(value)

Factory that creates a variant holding the value passed as a parameter.

Parameters:value – value that will be held by the variant
Returns:instance of the proper variant
Return type:MultiValuedVariant or SingleValuedVariant or BoolValuedVariant
validate_or_raise(vspec, pkg=None)

Validate a variant spec against this package variant. Raises an exception if any error is found.

Parameters:
  • vspec (VariantSpec) – instance to be validated
  • pkg (Package) – the package that required the validation, if available
Raises:
variant_cls

Proper variant class to be used for this configuration.

class spack.variant.VariantMap(spec)

Bases: llnl.util.lang.HashableMap

Map containing variant instances. New values can be added only if the key is not already present.

concrete

Returns True if the spec is concrete in terms of variants.

Returns:True or False
Return type:bool
constrain(other)

Add all variants in other that aren’t in self to self. Also constrain all multi-valued variants that are already present. Return True if self changed, False otherwise

Parameters:other (VariantMap) – instance against which we constrain self
Returns:True or False
Return type:bool
copy()

Return an instance of VariantMap equivalent to self.

Returns:a copy of self
Return type:VariantMap
satisfies(other, strict=False)

Returns True if this VariantMap is more constrained than other, False otherwise.

Parameters:
  • other (VariantMap) – VariantMap instance to satisfy
  • strict (bool) – if True return False if a key is in other and not in self, otherwise discard that key and proceed with evaluation
Returns:

True or False

Return type:

bool

substitute(vspec)

Substitutes the entry under vspec.name with vspec.

Parameters:vspec – variant spec to be substituted
spack.variant.implicit_variant_conversion(method)

Converts other to type(self) and calls method(self, other)

Parameters:method – any predicate method that takes another variant as an argument

Returns: decorated method

spack.variant.substitute_abstract_variants(spec)

Uses the information in spec.package to turn any variant that needs it into a SingleValuedVariant.

Parameters:spec – spec on which to operate the substitution

spack.version module

This module implements Version and version-ish objects. These are:

Version
A single version of a package.
VersionRange
A range of versions of a package.
VersionList
A list of Versions and VersionRanges.

All of these types support the following operations, which can be called on any of the types:

__eq__, __ne__, __lt__, __gt__, __ge__, __le__, __hash__
__contains__
satisfies
overlaps
union
intersection
concrete
class spack.version.Version(string)

Bases: object

Class to represent versions

concrete
dashed

The dashed representation of the version.

Example:

>>> version = Version('1.2.3b')
>>> version.dashed
Version('1-2-3b')

Returns:The version with separator characters replaced by dashes
Return type:Version
dotted

The dotted representation of the version.

Example:

>>> version = Version('1-2-3b')
>>> version.dotted
Version('1.2.3b')

Returns:The version with separator characters replaced by dots
Return type:Version
highest()
intersection(a, b, *args, **kwargs)
is_predecessor(other)

True if the other version is the immediate predecessor of this one. That is, NO versions v exist such that: (self < v < other and v not in self).

is_successor(other)
isdevelop()

Triggers on the special case of the @develop version.

isnumeric()

Tells if this version is numeric (vs. a non-numeric version). A version will be numeric as long as the first section of it is, even if it contains non-numeric portions.

Some numeric versions:
1 1.1 1.1a 1.a.1b
Some non-numeric versions:
develop system myfavoritebranch
joined

The joined representation of the version.

Example:

>>> version = Version('1.2.3b')
>>> version.joined
Version('123b')

Returns:The version with separator characters removed
Return type:Version
lowest()
overlaps(a, b, *args, **kwargs)
satisfies(a, b, *args, **kwargs)

A Version ‘satisfies’ another if it is at least as specific and has a common prefix. e.g., we want gcc@4.7.3 to satisfy a request for gcc@4.7 so that when a user asks to build with gcc@4.7, we can find a suitable compiler.

underscored

The underscored representation of the version.

Example:

>>> version = Version('1.2.3b')
>>> version.underscored
Version('1_2_3b')

Returns:
The version with separator characters replaced by
underscores
Return type:Version
union(a, b, *args, **kwargs)
up_to(index)

The version up to the specified component.

Examples:

>>> version = Version('1.23-4b')
>>> version.up_to(1)
Version('1')
>>> version.up_to(2)
Version('1.23')
>>> version.up_to(3)
Version('1.23-4')
>>> version.up_to(4)
Version('1.23-4b')
>>> version.up_to(-1)
Version('1.23-4')
>>> version.up_to(-2)
Version('1.23')
>>> version.up_to(-3)
Version('1')

Returns:The first index components of the version
Return type:Version
class spack.version.VersionRange(start, end)

Bases: object

concrete
highest()
intersection(a, b, *args, **kwargs)
lowest()
overlaps(a, b, *args, **kwargs)
satisfies(a, b, *args, **kwargs)

A VersionRange satisfies another if some version in this range would satisfy some version in the other range. To do this it must either:

  1. Overlap with the other range
  2. The start of this range satisfies the end of the other range.

This is essentially the same as overlaps(), but overlaps assumes that its arguments are specific. That is, 4.7 is interpreted as 4.7.0.0.0.0… . This function assumes that 4.7 would be satisfied by 4.7.3.5, etc.

Rationale:

If a user asks for gcc@4.5:4.7, and a package is only compatible with gcc@4.7.3:4.8, then that package should be able to build under the constraints. Just using overlaps() would not work here.

Note that we don’t need to check whether the end of this range would satisfy the start of the other range, because overlaps() already covers that case.

Note further that overlaps() is a symmetric operation, while satisfies() is not.
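
A brief sketch of this distinction, using ver() to parse the ranges from the rationale above:

from spack.version import Version, ver

requested = ver('4.5:4.7')      # what the user asked for
supported = ver('4.7.3:4.8')    # what the package can build with

print(Version('4.7.3').satisfies(Version('4.7')))   # True: 4.7.3 satisfies a request for 4.7
print(requested.overlaps(supported))                # False: the ranges do not literally overlap
print(requested.satisfies(supported))               # True, per the rationale above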

union(a, b, *args, **kwargs)
class spack.version.VersionList(vlist=None)

Bases: object

Sorted, non-redundant list of Versions and VersionRanges.

add(version)
concrete
copy()
static from_dict(dictionary)

Parse dict from to_dict.

highest()

Get the highest version in the list.

intersect(a, b, *args, **kwargs)

Intersect this spec’s list with other.

Return True if the spec changed as a result; False otherwise

intersection(a, b, *args, **kwargs)
lowest()

Get the lowest version in the list.

overlaps(a, b, *args, **kwargs)
satisfies(a, b, *args, **kwargs)

A VersionList satisfies another if some version in the list would satisfy some version in the other list. This uses essentially the same algorithm as overlaps() does for VersionList, but it calls satisfies() on member Versions and VersionRanges.

If strict is specified, this version list must lie entirely within the other in order to satisfy it.

to_dict()

Generate human-readable dict for YAML.

union(a, b, *args, **kwargs)
update(a, b, *args, **kwargs)
spack.version.ver(obj)

Parses a Version, VersionRange, or VersionList from a string or list of strings.

Module contents

spack.run_before(*phases)

Registers a method of a package to be run before a given phase

spack.run_after(*phases)

Registers a method of a package to be run after a given phase

spack.on_package_attributes(**attr_dict)

Decorator: executes an instance function only if the object has the given attribute values.

Executes the decorated method only if at the moment of calling the instance has attributes that are equal to certain values.

Parameters:attr_dict (dict) – dictionary mapping attribute names to their required values
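
A minimal sketch of these hooks in a hypothetical recipe; run_before works the same way on the other side of a phase, and the run_tests attribute is assumed to be set when the user requests build-time tests:

from spack import Package, run_after, on_package_attributes

class Libbar(Package):
    """Hypothetical package with a single install phase."""

    def install(self, spec, prefix):
        # A Package subclass must code its own install phase.
        pass

    @run_after('install')
    @on_package_attributes(run_tests=True)
    def check_install(self):
        # Runs after the 'install' phase, and only when self.run_tests is True.
        pass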
class spack.Package(spec)

Bases: spack.package.PackageBase

General purpose class with a single install phase that needs to be coded by packagers.

build_system_class = 'Package'
phases = ['install']
class spack.MakefilePackage(spec)

Bases: spack.package.PackageBase

Specialized class for packages that are built using editable Makefiles

This class provides three phases that can be overridden:

  1. edit()
  2. build()
  3. install()

It is usually necessary to override the edit() phase, while build() and install() have sensible defaults. For a finer tuning you may override:

Method Purpose
build_targets Specify make targets for the build phase
install_targets Specify make targets for the install phase
build_directory() Directory where the Makefile is located
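
A small hypothetical recipe showing the overrides from the table above; the name and URLs are made up:

from spack import *    # package files conventionally get MakefilePackage this way

class Libbaz(MakefilePackage):
    """Hypothetical library built from an editable Makefile."""

    homepage = 'https://example.com/libbaz'
    url = 'https://example.com/libbaz-1.0.tar.gz'

    install_targets = ['install']

    @property
    def build_targets(self):
        # Pass the prefix on the make command line instead of editing the Makefile.
        return ['PREFIX={0}'.format(self.prefix), 'all']

    def edit(self, spec, prefix):
        # Nothing to edit in this sketch; the phase must still be provided by the packager.
        pass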
build(spec, prefix)

Calls make, passing build_targets as targets.

build_directory

Returns the directory containing the main Makefile

Returns:build directory
build_system_class = 'MakefilePackage'
build_targets = []
build_time_test_callbacks = ['check']
check()

Searches the Makefile for targets test and check and runs them if found.

edit(spec, prefix)

Edits the Makefile before calling make. This phase cannot be defaulted.

install(spec, prefix)

Calls make, passing install_targets as targets.

install_targets = ['install']
install_time_test_callbacks = ['installcheck']
installcheck()

Searches the Makefile for an installcheck target and runs it if found.

phases = ['edit', 'build', 'install']
class spack.AspellDictPackage(spec)

Bases: spack.build_systems.autotools.AutotoolsPackage

Specialized class for building aspell dictionaries.

configure(spec, prefix)
patch()
class spack.AutotoolsPackage(spec)

Bases: spack.package.PackageBase

Specialized class for packages built using GNU Autotools.

This class provides four phases that can be overridden:

  1. autoreconf()
  2. configure()
  3. build()
  4. install()

They all have sensible defaults and for many packages the only thing necessary will be to override the helper method configure_args(). For a finer tuning you may also override:

Method Purpose
build_targets Specify make targets for the build phase
install_targets Specify make targets for the install phase
check() Run build time tests if required
autoreconf(spec, prefix)

Not usually needed; configure should already be there

autoreconf_extra_args = []
build(spec, prefix)

Makes the build targets specified by build_targets

build_directory

Override to provide another place to build the package

build_system_class = 'AutotoolsPackage'
build_targets = []
build_time_test_callbacks = ['check']
check()

Searches the Makefile for targets test and check and runs them if found.

configure(spec, prefix)

Runs configure with the arguments specified in configure_args() and an appropriately set prefix.

configure_abs_path
configure_args()

Produces a list containing all the arguments that must be passed to configure, except --prefix which will be pre-pended to the list.

Returns:list of arguments for configure
configure_directory

Returns the directory where ‘configure’ resides.

Returns:directory where to find configure
default_flag_handler(spack_env, flag_val)
delete_configure_to_force_update()
enable_or_disable(name, activation_value=None)

Same as with_or_without(), but uses enable in place of with and disable in place of without.

Parameters:
  • name (str) – name of a valid multi-valued variant
  • activation_value (callable) –

    if present accepts a single value and returns the parameter to be used leading to an entry of the type --enable-{name}={parameter}

    The special value ‘prefix’ can also be assigned and will return spec[name].prefix as activation parameter.

Returns:

list of arguments to configure

force_autoreconf = False
install(spec, prefix)

Makes the install targets specified by install_targets.

install_targets = ['install']
install_time_test_callbacks = ['installcheck']
installcheck()

Searches the Makefile for an installcheck target and runs it if found.

patch_config_guess = True
phases = ['autoreconf', 'configure', 'build', 'install']
set_configure_or_die()

Checks for the presence of a configure script after the autoreconf phase. If it is found, sets a module attribute appropriately; otherwise raises an error.

Raises:RuntimeError – if a configure script is not found in configure_directory()
with_or_without(name, activation_value=None)

Inspects a variant and returns the arguments that activate or deactivate the selected feature(s) for the configure options.

This function works on all types of variants. For bool-valued variants it will return by default --with-{name} or --without-{name}. For other kinds of variants it will cycle over the allowed values and return either --with-{value} or --without-{value}.

If activation_value is given, then for each possible value of the variant, the option --with-{value}=activation_value(value) or --without-{value} will be added depending on whether or not variant=value is in the spec.

Parameters:
  • name (str) – name of a valid multi-valued variant
  • activation_value (callable) –

    callable that accepts a single value and returns the parameter to be used leading to an entry of the type --with-{name}={parameter}.

    The special value ‘prefix’ can also be assigned and will return spec[name].prefix as activation parameter.

Returns:

list of arguments to configure
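
A sketch of a typical subclass (names hypothetical): configure_args() supplies extra flags, and enable_or_disable() turns a boolean variant into --enable-shared or --disable-shared as described above:

class Libbaz(AutotoolsPackage):
    """Hypothetical Autotools-based package."""

    variant('shared', default=True, description='Build shared libraries')

    def configure_args(self):
        # --prefix is prepended automatically; only extra flags go here.
        args = ['--disable-dependency-tracking']
        args += self.enable_or_disable('shared')
        return args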

class spack.CMakePackage(spec)

Bases: spack.package.PackageBase

Specialized class for packages built using CMake

For more information on the CMake build system, see: https://cmake.org/cmake/help/latest/

This class provides three phases that can be overridden:

  1. cmake()
  2. build()
  3. install()

They all have sensible defaults and for many packages the only thing necessary will be to override cmake_args(). For a finer tuning you may also override:

Method Purpose
root_cmakelists_dir() Location of the root CMakeLists.txt
build_directory() Directory where to build the package
build(spec, prefix)

Make the build targets

build_directory

Returns the directory to use when building the package

Returns:directory where to build the package
build_system_class = 'CMakePackage'
build_targets = []
build_time_test_callbacks = ['check']
check()

Searches the CMake-generated Makefile for the target test and runs it if found.

cmake(spec, prefix)

Runs cmake in the build directory

cmake_args()

Produces a list containing all the arguments that must be passed to cmake, except:

  • CMAKE_INSTALL_PREFIX
  • CMAKE_BUILD_TYPE

which will be set automatically.

Returns:list of arguments for cmake
default_flag_handler(spack_env, flag_val)
generator = 'Unix Makefiles'
install(spec, prefix)

Make the install targets

install_targets = ['install']
phases = ['cmake', 'build', 'install']
root_cmakelists_dir

The relative path to the directory containing CMakeLists.txt

This path is relative to the root of the extracted tarball, not to the build_directory. Defaults to the current directory.

Returns:directory containing CMakeLists.txt
std_cmake_args

Standard cmake arguments provided as a property for convenience of package writers

Returns:standard cmake arguments
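
A sketch of a typical subclass (names hypothetical); CMAKE_INSTALL_PREFIX and CMAKE_BUILD_TYPE are added automatically, so cmake_args() only returns package-specific options:

class Libqux(CMakePackage):
    """Hypothetical CMake-based package."""

    variant('fortran', default=False, description='Enable Fortran bindings')

    def cmake_args(self):
        fortran = 'ON' if '+fortran' in self.spec else 'OFF'
        return ['-DENABLE_FORTRAN:BOOL={0}'.format(fortran)]
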
class spack.QMakePackage(spec)

Bases: spack.package.PackageBase

Specialized class for packages built using qmake.

For more information on the qmake build system, see: http://doc.qt.io/qt-5/qmake-manual.html

This class provides three phases that can be overridden:

  1. qmake()
  2. build()
  3. install()

They all have sensible defaults and for many packages the only thing necessary will be to override qmake_args().

build(spec, prefix)

Make the build targets

build_system_class = 'QMakePackage'
build_time_test_callbacks = ['check']
check()

Searches the Makefile for a check: target and runs it if found.

install(spec, prefix)

Make the install targets

phases = ['qmake', 'build', 'install']
qmake(spec, prefix)

Run qmake to configure the project and generate a Makefile.

qmake_args()

Produces a list containing all the arguments that must be passed to qmake

class spack.SConsPackage(spec)

Bases: spack.package.PackageBase

Specialized class for packages built using SCons.

See http://scons.org/documentation.html for more information.

This class provides the following phases that can be overridden:

  1. build()
  2. install()

Packages that use SCons as a build system are less uniform than packages that use other build systems. Developers can add custom subcommands or variables that control the build. You will likely need to override build_args() to pass the appropriate variables.

build(spec, prefix)

Build the package.

build_args(spec, prefix)

Arguments to pass to build.

build_system_class = 'SConsPackage'
build_time_test_callbacks = ['test']
install(spec, prefix)

Install the package.

install_args(spec, prefix)

Arguments to pass to install.

phases = ['build', 'install']
test()

Run unit tests after build.

By default, does nothing. Override this if you want to add package-specific tests.
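
Because SCons builds are less uniform, most of the work usually goes into build_args(); a sketch with hypothetical build variables:

class Scfoo(SConsPackage):
    """Hypothetical SCons-based package."""

    def build_args(self, spec, prefix):
        # Variables understood by this (hypothetical) SConstruct.
        return ['PREFIX={0}'.format(prefix), 'CC=cc']

    def install_args(self, spec, prefix):
        return ['PREFIX={0}'.format(prefix)]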

class spack.WafPackage(spec)

Bases: spack.package.PackageBase

Specialized class for packages that are built using the Waf build system. See https://waf.io/book/ for more information.

This class provides the following phases that can be overridden:

  • configure
  • build
  • install

These are all standard Waf commands and can be found by running:

$ python waf --help

Each phase provides a function <phase> that runs:

$ python waf -j<jobs> <phase>

where <jobs> is the number of parallel jobs to build with. Each phase also has a <phase_args> function that can pass arguments to this call. All of these functions are empty except for the configure_args function, which passes --prefix=/path/to/installation/prefix.

build(spec, prefix)

Executes the build.

build_args()

Arguments to pass to build.

build_directory

The directory containing the waf file.

build_system_class = 'WafPackage'
build_time_test_callbacks = ['test']
configure(spec, prefix)

Configures the project.

configure_args()

Arguments to pass to configure.

install(spec, prefix)

Installs the targets on the system.

install_args()

Arguments to pass to install.

install_time_test_callbacks = ['installtest']
installtest()

Run unit tests after install.

By default, does nothing. Override this if you want to add package-specific tests.

phases = ['configure', 'build', 'install']
python(*args, **kwargs)

The python Executable.

test()

Run unit tests after build.

By default, does nothing. Override this if you want to add package-specific tests.

waf(*args, **kwargs)

Runs the waf Executable.

class spack.PythonPackage(spec)

Bases: spack.package.PackageBase

Specialized class for packages that are built using Python setup.py files

This class provides the following phases that can be overridden:

  • build
  • build_py
  • build_ext
  • build_clib
  • build_scripts
  • clean
  • install
  • install_lib
  • install_headers
  • install_scripts
  • install_data
  • sdist
  • register
  • bdist
  • bdist_dumb
  • bdist_rpm
  • bdist_wininst
  • upload
  • check

These are all standard setup.py commands and can be found by running:

$ python setup.py --help-commands

By default, only the ‘build’ and ‘install’ phases are run, but if you need to run more phases, simply modify your phases list like so:

phases = ['build_ext', 'install', 'bdist']

Each phase provides a function <phase> that runs:

$ python setup.py --no-user-cfg <phase>

Each phase also has a <phase_args> function that can pass arguments to this call. All of these functions are empty except for the install_args function, which passes --prefix=/path/to/installation/directory.

If you need to run a phase which is not a standard setup.py command, you’ll need to define a function for it like so:

def configure(self, spec, prefix):
    self.setup_py('configure')
bdist(spec, prefix)

Create a built (binary) distribution.

bdist_args(spec, prefix)

Arguments to pass to bdist.

bdist_dumb(spec, prefix)

Create a “dumb” built distribution.

bdist_dumb_args(spec, prefix)

Arguments to pass to bdist_dumb.

bdist_rpm(spec, prefix)

Create an RPM distribution.

bdist_rpm_args(spec, prefix)

Arguments to pass to bdist_rpm.

bdist_wininst(spec, prefix)

Create an executable installer for MS Windows.

bdist_wininst_args(spec, prefix)

Arguments to pass to bdist_wininst.

build(spec, prefix)

Build everything needed to install.

build_args(spec, prefix)

Arguments to pass to build.

build_clib(spec, prefix)

Build C/C++ libraries used by Python extensions.

build_clib_args(spec, prefix)

Arguments to pass to build_clib.

build_directory

The directory containing the setup.py file.

build_ext(spec, prefix)

Build C/C++ extensions (compile/link to build directory).

build_ext_args(spec, prefix)

Arguments to pass to build_ext.

build_py(spec, prefix)

“Build” pure Python modules (copy to build directory).

build_py_args(spec, prefix)

Arguments to pass to build_py.

build_scripts(spec, prefix)

“Build” scripts (copy and fixup #! line).

build_system_class = 'PythonPackage'
build_time_test_callbacks = ['test']
check(spec, prefix)

Perform some checks on the package.

check_args(spec, prefix)

Arguments to pass to check.

clean(spec, prefix)

Clean up temporary files from ‘build’ command.

clean_args(spec, prefix)

Arguments to pass to clean.

import_module_test()

Attempts to import the module that was just installed.

This test is only run if the package overrides import_modules with a list of module names.

import_modules = []
install(spec, prefix)

Install everything from build directory.

install_args(spec, prefix)

Arguments to pass to install.

install_data(spec, prefix)

Install data files.

install_data_args(spec, prefix)

Arguments to pass to install_data.

install_headers(spec, prefix)

Install C/C++ header files.

install_headers_args(spec, prefix)

Arguments to pass to install_headers.

install_lib(spec, prefix)

Install all Python modules (extensions and pure Python).

install_lib_args(spec, prefix)

Arguments to pass to install_lib.

install_scripts(spec, prefix)

Install scripts (Python or otherwise).

install_scripts_args(spec, prefix)

Arguments to pass to install_scripts.

install_time_test_callbacks = ['import_module_test']
phases = ['build', 'install']
python(*args, **kwargs)
register(spec, prefix)

Register the distribution with the Python package index.

register_args(spec, prefix)

Arguments to pass to register.

sdist(spec, prefix)

Create a source distribution (tarball, zip file, etc.).

sdist_args(spec, prefix)

Arguments to pass to sdist.

setup_file()

Returns the name of the setup file to use.

setup_py(*args, **kwargs)
test()

Run unit tests after in-place build.

These tests are only run if the package actually has a ‘test’ command.

test_args(spec, prefix)

Arguments to pass to test.

upload(spec, prefix)

Upload binary package to PyPI.

upload_args(spec, prefix)

Arguments to pass to upload.
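
A sketch of a typical subclass (names hypothetical); most Python packages only need dependencies, and import_modules enables the post-install import test:

class PyFoo(PythonPackage):
    """Hypothetical setup.py-based package."""

    # Checked by import_module_test() after installation.
    import_modules = ['foo']

    depends_on('py-setuptools', type='build')
    depends_on('py-numpy', type=('build', 'run'))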

class spack.RPackage(spec)

Bases: spack.package.PackageBase

Specialized class for packages that are built using R.

For more information on the R build system, see: https://stat.ethz.ch/R-manual/R-devel/library/utils/html/INSTALL.html

This class provides a single phase that can be overridden:

  1. install()

It has sensible defaults, and for many packages the only thing necessary will be to add dependencies.

build_system_class = 'RPackage'
configure_args()

Arguments to pass to install via --configure-args.

configure_vars()

Arguments to pass to install via --configure-vars.

install(spec, prefix)

Installs an R package.

phases = ['install']
class spack.PerlPackage(spec)

Bases: spack.package.PackageBase

Specialized class for packages that are built using Perl.

This class provides four phases that can be overridden if required:

  1. configure()
  2. build()
  3. check()
  4. install()
The default methods use, in order of preference:
  1. Makefile.PL,
  2. Build.PL.

Some packages may need to override configure_args(), which produces a list of arguments for configure(). Arguments should not include the installation base directory.

build(spec, prefix)

Builds a Perl package.

build_system_class = 'PerlPackage'
build_time_test_callbacks = ['check']
check()

Runs built-in tests of a Perl package.

configure(spec, prefix)

Runs Makefile.PL or Build.PL with arguments consisting of an appropriate installation base directory followed by the list returned by configure_args().

Raises:RuntimeError – if neither Makefile.PL nor Build.PL exists
configure_args()

Produces a list containing the arguments that must be passed to configure(). Arguments should not include the installation base directory, which is prepended automatically.

Returns:list of arguments for Makefile.PL or Build.PL
install(spec, prefix)

Installs a Perl package.

phases = ['configure', 'build', 'install']
class spack.IntelPackage(spec)

Bases: spack.package.PackageBase

Specialized class for licensed Intel software.

This class provides two phases that can be overridden:

  1. configure()
  2. install()

They both have sensible defaults and for many packages the only thing necessary will be to override setup_environment to set the appropriate environment variables.

build_system_class = 'IntelPackage'
components = ['ALL']
configure(spec, prefix)

Writes the silent.cfg file used to configure the installation.

See https://software.intel.com/en-us/articles/configuration-file-format

global_license_file

Returns the path where a global license file should be stored.

All Intel software shares the same license, so we store it in a common ‘intel’ directory.

install(spec, prefix)

Runs the install.sh installation script.

license_comment = '#'
license_files = ['Licenses/license.lic']
license_required = True
license_url = 'https://software.intel.com/en-us/articles/intel-license-manager-faq'
license_vars = ['INTEL_LICENSE_FILE']
phases = ['configure', 'install']
save_silent_cfg()

Copies the silent.cfg configuration file to <prefix>/.spack.

class spack.Version(string)

Bases: object

Class to represent versions

concrete
dashed

The dashed representation of the version.

Example:

>>> version = Version('1.2.3b')
>>> version.dashed
Version('1-2-3b')

Returns:The version with separator characters replaced by dashes
Return type:Version
dotted

The dotted representation of the version.

Example:

>>> version = Version('1-2-3b')
>>> version.dotted
Version('1.2.3b')

Returns:The version with separator characters replaced by dots
Return type:Version
highest()
intersection(a, b, *args, **kwargs)
is_predecessor(other)

True if the other version is the immediate predecessor of this one. That is, NO versions v exist such that: (self < v < other and v not in self).

is_successor(other)
isdevelop()

Triggers on the special case of the @develop version.

isnumeric()

Tells if this version is numeric (vs. a non-numeric version). A version will be numeric as long as its first section is, even if it contains non-numeric portions.

Some numeric versions:
  1, 1.1, 1.1a, 1.a.1b
Some non-numeric versions:
  develop, system, myfavoritebranch
joined

The joined representation of the version.

Example:

>>> version = Version('1.2.3b')
>>> version.joined
Version('123b')

Returns:The version with separator characters removed
Return type:Version
lowest()
overlaps(a, b, *args, **kwargs)
satisfies(a, b, *args, **kwargs)

A Version ‘satisfies’ another if it is at least as specific and has a common prefix. e.g., we want gcc@4.7.3 to satisfy a request for gcc@4.7 so that when a user asks to build with gcc@4.7, we can find a suitable compiler.

underscored

The underscored representation of the version.

Example:

>>> version = Version('1.2.3b')
>>> version.underscored
Version('1_2_3b')

Returns:The version with separator characters replaced by underscores
Return type:Version
union(a, b, *args, **kwargs)
up_to(index)

The version up to the specified component.

Examples:

>>> version = Version('1.23-4b')
>>> version.up_to(1)
Version('1')
>>> version.up_to(2)
Version('1.23')
>>> version.up_to(3)
Version('1.23-4')
>>> version.up_to(4)
Version('1.23-4b')
>>> version.up_to(-1)
Version('1.23-4')
>>> version.up_to(-2)
Version('1.23')
>>> version.up_to(-3)
Version('1')

Returns:The first index components of the version
Return type:Version
spack.ver(obj)

Parses a Version, VersionRange, or VersionList from a string or list of strings.
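
A short sketch of how ver() and the resulting objects are commonly used (values are illustrative):

v = ver('1.2.3')            # a single Version
r = ver('1.2:1.4')          # a VersionRange
vl = ver(['1.0', '2.1:'])   # a VersionList

v.up_to(2)                  # Version('1.2')
v.satisfies(ver('1.2'))     # True: common prefix, at least as specific
v in r                      # membership test against the range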

class spack.Spec(spec_like, **kwargs)

Bases: object

cformat(*args, **kwargs)

Same as format, but color defaults to auto instead of False.

colorized()
common_dependencies(other)

Return names of dependencies that self and other have in common.

concrete

A spec is concrete if it describes a single build of a package.

More formally, a spec is concrete if concretize() has been called on it and it has been marked _concrete.

Concrete specs either can be or have been built. All constraints have been resolved, optional dependencies have been added or removed, a compiler has been chosen, and all variants have values.

concretize()

A spec is concrete if it describes one build of a package uniquely. This will ensure that this spec is concrete.

If this spec could describe more than one version, variant, or build of a package, this will add constraints to make it concrete.

Some rigorous validation and checks are also performed on the spec. Concretizing ensures that it is self-consistent and that it’s consistent with requirements of its packages. See flatten() and normalize() for more details on this.

It also ensures that:

for x in self.traverse():
    assert x.package.spec == x

which may not be true during the concretization step.

concretized()

This is a non-destructive version of concretize(). First clones, then returns a concrete version of this package without modifying this package.

constrain(other, deps=True)

Merge the constraints of other with self.

Returns True if the spec changed as a result, False if not.

constrained(other, deps=True)

Return a constrained copy without modifying this spec.

copy(deps=True, **kwargs)

Make a copy of this spec.

Parameters:
  • deps (bool or tuple) – Defaults to True. If boolean, controls whether dependencies are copied (copied if True). If a tuple is provided, only dependencies of types matching those in the tuple are copied.
  • kwargs – additional arguments for internal use (passed to _dup).
Returns:

A copy of this spec.

Examples

Deep copy with dependencies:

spec.copy()
spec.copy(deps=True)

Shallow copy (no dependencies):

spec.copy(deps=False)

Only build and run dependencies:

spec.copy(deps=('build', 'run'))
cshort_spec

Returns an auto-colorized version of self.short_spec.

dag_hash(length=None)

Return a hash of the entire spec DAG, including connectivity.

dag_hash_bit_prefix(bits)

Get the first <bits> bits of the DAG hash as an integer type.

dep_difference(other)

Returns dependencies in self that are not in other.

dep_string()
dependencies(deptype='all')
dependencies_dict(deptype='all')
dependents(deptype='all')
dependents_dict(deptype='all')
eq_dag(other, deptypes=True)

True if the full dependency DAGs of specs are equal.

eq_node(other)

Equality with another spec, not including dependencies.

external
flat_dependencies(**kwargs)

Return a DependencyMap containing all of this spec’s dependencies with their constraints merged.

If copy is True, returns merged copies of its dependencies without modifying the spec it’s called on.

If copy is False, clears this spec’s dependencies and returns them.

format(format_string='$_$@$%@+$+$=', **kwargs)

Prints out particular pieces of a spec, depending on what is in the format string.

The format strings you can provide are:

$_   Package name
$.   Full package name (with namespace)
$@   Version with '@' prefix
$%   Compiler with '%' prefix
$%@  Compiler with '%' prefix & compiler version with '@' prefix
$%+  Compiler with '%' prefix & compiler flags prefixed by name
$%@+ Compiler, compiler version, and compiler flags with same
     prefixes as above
$+   Options
$=   Architecture prefixed by 'arch='
$/   7-char prefix of DAG hash with '-' prefix
$$   $

You can also use full-string versions, which elide the prefixes:

${PACKAGE}       Package name
${VERSION}       Version
${COMPILER}      Full compiler string
${COMPILERNAME}  Compiler name
${COMPILERVER}   Compiler version
${COMPILERFLAGS} Compiler flags
${OPTIONS}       Options
${ARCHITECTURE}  Architecture
${SHA1}          Dependencies 8-char sha1 prefix
${HASH:len}      DAG hash with optional length specifier

${SPACK_ROOT}    The spack root directory
${SPACK_INSTALL} The default spack install directory,
                 ${SPACK_PREFIX}/opt
${PREFIX}        The package prefix

Note these are case-insensitive: for example you can specify either ${PACKAGE} or ${package}.

Optionally you can provide a width, e.g. $20_ for a 20-wide name. Like printf, you can provide ‘-‘ for left justification, e.g. $-20_ for a left-justified name.

Anything else is copied verbatim into the output stream.

Parameters:
  • format_string (str) – string containing the format to be expanded
  • **kwargs (dict) –

    the following list of keywords is supported

    • color (bool): True if returned string is colored
    • transform (dict): maps full-string formats to a callable that accepts a string and returns another one

Examples

The following line:

s = spec.format('$_$@$+')

translates to the name, version, and options of the package, but no dependencies, arch, or compiler.

TODO: allow, e.g., $6# to customize short hash length
TODO: allow, e.g., $// for full hash.

static from_dict(data)

Construct a spec from YAML.

Parameters: data – a nested dict/list data structure read from YAML or JSON.

static from_json(stream)

Construct a spec from JSON.

Parameters: stream – string or file object to read from.

static from_literal(spec_dict, normal=True)

Builds a Spec from a dictionary containing the spec literal.

The dictionary must have a single top level key, representing the root, and as many secondary level keys as needed in the spec.

The keys can be either a string or a Spec or a tuple containing the Spec and the dependency types.

Parameters:
  • spec_dict (dict) – the dictionary containing the spec literal
  • normal (bool) – if True the same key appearing at different levels of the spec_dict will map to the same object in memory.

Examples

A simple spec foo with no dependencies:

{'foo': None}

A spec foo with a (build, link) dependency bar:

{'foo':
    {'bar:build,link': None}}

A spec with a diamond dependency and various build types:

{'dt-diamond': {
    'dt-diamond-left:build,link': {
        'dt-diamond-bottom:build': None
    },
    'dt-diamond-right:build,link': {
        'dt-diamond-bottom:build,link,run': None
    }
}}

The same spec with a double copy of dt-diamond-bottom and no diamond structure:

{'dt-diamond': {
    'dt-diamond-left:build,link': {
        'dt-diamond-bottom:build': None
    },
    'dt-diamond-right:build,link': {
        'dt-diamond-bottom:build,link,run': None
    }
}, normal=False}

Constructing a spec using a Spec object as key:

mpich = Spec('mpich')
libelf = Spec('libelf@1.8.11')
expected_normalized = Spec.from_literal({
    'mpileaks': {
        'callpath': {
            'dyninst': {
                'libdwarf': {libelf: None},
                libelf: None
            },
            mpich: None
        },
        mpich: None
    },
})
static from_node_dict(node)
static from_yaml(stream)

Construct a spec from YAML.

Parameters: stream – string or file object to read from.

fullname
get_dependency(name)
index(deptype='all')

Return DependencyMap that points to all the dependencies in this spec.

static is_virtual(name)

Test if a name is virtual without requiring a Spec.

ne_dag(other, deptypes=True)

True if the full dependency DAGs of specs are not equal.

ne_node(other)

Inequality with another spec, not including dependencies.

normalize(force=False)

When specs are parsed, any dependencies specified are hanging off the root, and ONLY the ones that were explicitly provided are there. Normalization turns a partial flat spec into a DAG, where:

  1. Known dependencies of the root package are in the DAG.
  2. Each node’s dependencies dict only contains its known direct deps.
  3. There is only ONE unique spec for each package in the DAG.
    • This includes virtual packages. If there is a non-virtual package that provides a virtual package that is in the spec, then we replace the virtual package with the non-virtual one.

TODO: normalize should probably implement some form of cycle detection, to ensure that the spec is actually a DAG.

normalized()

Return a normalized copy of this spec without modifying this spec.

package
package_class

Internal package call gets only the class object for a package. Use this to just get package metadata.

patches

Return patch objects for any patch sha256 sums on this Spec.

This is for use after concretization to iterate over any patches associated with this spec.

TODO: this only checks in the package; it doesn’t resurrect old patches from install directories, but it probably should.

prefix
static read_yaml_dep_specs(dependency_dict)

Read the DependencySpec portion of a YAML-formatted Spec.

This needs to be backward-compatible with older spack spec formats so that reindex will work on old specs/databases.

root

Follow dependent links and find the root of this spec’s DAG.

Spack specs have a single root (the package being installed).

satisfies(other, deps=True, strict=False, strict_deps=False)

Determine if this spec satisfies all constraints of another.

There are two senses for satisfies:

  • loose (default): the absence of a constraint in self implies that it could be satisfied by other, so we only check that there are no conflicts with other for constraints that this spec actually has.
  • strict: strict means that we must meet all the constraints specified on other.
satisfies_dependencies(other, strict=False)

This checks constraints on common dependencies against each other.

short_spec

Returns a version of the spec with the dependencies hashed instead of completely enumerated.

sorted_deps()

Return a list of all dependencies sorted by name.

to_dict()
to_json(stream=None)
to_node_dict()
to_yaml(stream=None)
traverse(**kwargs)
traverse_edges(visited=None, d=0, deptype='all', dep_spec=None, **kwargs)

Generic traversal of the DAG represented by this spec. This will yield each node in the spec. Options:

order [=pre|post]
  Order to traverse spec nodes. Defaults to pre-order traversal. Options are:
    'pre':  pre-order traversal; each node is yielded before its children in the dependency DAG.
    'post': post-order traversal; each node is yielded after its children in the dependency DAG.
cover [=nodes|edges|paths]
  Determines how extensively to cover the DAG. Possible values:
    'nodes': visit each node in the DAG only once. Every node yielded by this function will be unique.
    'edges': if a node has been visited once but is reached along a new path from the root, yield it but do not descend into it. This traverses each 'edge' in the DAG once.
    'paths': explore every unique path reachable from the root. This descends into visited subtrees and will yield nodes twice if they are reachable by multiple paths.
depth [=False]
  Defaults to False. When True, yields not just nodes in the spec, but also their depth from the root in a (depth, node) tuple.
key [=id]
  Allow a custom key function to track the identity of nodes in the traversal.
root [=True]
  If False, this won't yield the root node, just its descendants.
direction [=children|parents]
  If 'children', does a traversal of this spec's children. If 'parents', traverses upwards in the DAG towards the root.
tree(**kwargs)

Prints out this spec and its dependencies, tree-formatted with indentation.

validate_or_raise()

Checks that names and values in this spec are real. If they’re not, it will raise an appropriate exception.

version
virtual

Right now, a spec is virtual if no package exists with its name.

TODO: revisit this – might need to use a separate namespace and be more explicit about this. Possible idea: just use convention and make virtual deps all caps, e.g., MPI vs mpi.

virtual_dependencies()

Return list of any virtual deps in this spec.
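
A brief sketch of typical Spec usage (the package name and format string are illustrative):

s = Spec('hdf5@1.10: +mpi %gcc')
s.concretize()                      # resolve version, variants, compiler, deps
print(s.format('$_$@$%@'))          # name, version, and compiler
for node in s.traverse(order='post'):
    print(node.name, node.version)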

class spack.when(spec)

Bases: object

This annotation lets packages declare multiple versions of methods like install() that depend on the package’s spec. For example:

class SomePackage(Package):
    ...

    def install(self, prefix):
        # Do default install

    @when('arch=chaos_5_x86_64_ib')
    def install(self, prefix):
        # This will be executed instead of the default install if
        # the package's platform() is chaos_5_x86_64_ib.

    @when('arch=bgqos_0')
    def install(self, prefix):
        # This will be executed if the package's sys_type is bgqos_0

This allows each package to have a default version of install() AND specialized versions for particular platforms. The version that is called depends on the architecture of the instantiated package.

Note that this works for methods other than install, as well. So, if you only have part of the install that is platform specific, you could do this:

class SomePackage(Package):
    ...
    # virtual dependence on MPI.
    # could resolve to mpich, mpich2, OpenMPI
    depends_on('mpi')

    def setup(self):
        # do nothing in the default case
        pass

    @when('^openmpi')
    def setup(self):
        # do something special when this is built with OpenMPI for
        # its MPI implementations.


    def install(self, prefix):
        # Do common install stuff
        self.setup()
        # Do more common install stuff

There must be one (and only one) @when clause that matches the package’s spec. If there is more than one, or if none match, then the method will raise an exception when it’s called.

Note that the default version of decorated methods must always come first. Otherwise it will override all of the platform-specific versions. There’s not much we can do to get around this because of the way decorators work.

class spack.FileFilter(*filenames)

Bases: object

Convenience class for calling filter_file a lot.

filter(regex, repl, **kwargs)
class spack.FileList(files)

Bases: _abcoll.Sequence

Sequence of absolute paths to files.

Provides a few convenience methods to manipulate file paths.

basenames

Stable de-duplication of the base-names in the list

>>> l = LibraryList(['/dir1/liba.a', '/dir2/libb.a', '/dir3/liba.a'])
>>> l.basenames
['liba.a', 'libb.a']
>>> h = HeaderList(['/dir1/a.h', '/dir2/b.h', '/dir3/a.h'])
>>> h.basenames
['a.h', 'b.h']
Returns:A list of base-names
Return type:list of strings
directories

Stable de-duplication of the directories where the files reside.

>>> l = LibraryList(['/dir1/liba.a', '/dir2/libb.a', '/dir1/libc.a'])
>>> l.directories
['/dir1', '/dir2']
>>> h = HeaderList(['/dir1/a.h', '/dir1/b.h', '/dir2/c.h'])
>>> h.directories
['/dir1', '/dir2']
Returns:A list of directories
Return type:list of strings
joined(separator=' ')
class spack.HeaderList(files)

Bases: llnl.util.filesystem.FileList

Sequence of absolute paths to headers.

Provides a few convenience methods to manipulate header paths and get commonly used compiler flags or names.

add_macro(macro)

Add a macro definition

Parameters:macro (str) – The macro to add
cpp_flags

Include flags + macro definitions

>>> h = HeaderList(['/dir1/a.h', '/dir1/b.h', '/dir2/c.h'])
>>> h.cpp_flags
'-I/dir1 -I/dir2'
>>> h.add_macro('-DBOOST_DYN_LINK')
>>> h.cpp_flags
'-I/dir1 -I/dir2 -DBOOST_DYN_LINK'
Returns:A joined list of include flags and macro definitions
Return type:str
headers

Stable de-duplication of the headers.

Returns:A list of header files
Return type:list of strings
include_flags

Include flags

>>> h = HeaderList(['/dir1/a.h', '/dir1/b.h', '/dir2/c.h'])
>>> h.include_flags
'-I/dir1 -I/dir2'
Returns:A joined list of include flags
Return type:str
macro_definitions

Macro definitions

>>> h = HeaderList(['/dir1/a.h', '/dir1/b.h', '/dir2/c.h'])
>>> h.add_macro('-DBOOST_LIB_NAME=boost_regex')
>>> h.add_macro('-DBOOST_DYN_LINK')
>>> h.macro_definitions
'-DBOOST_LIB_NAME=boost_regex -DBOOST_DYN_LINK'
Returns:A joined list of macro definitions
Return type:str
names

Stable de-duplication of header names in the list without extensions

>>> h = HeaderList(['/dir1/a.h', '/dir2/b.h', '/dir3/a.h'])
>>> h.names
['a', 'b']
Returns:A list of files without extensions
Return type:list of strings
class spack.LibraryList(files)

Bases: llnl.util.filesystem.FileList

Sequence of absolute paths to libraries

Provides a few convenience methods to manipulate library paths and get commonly used compiler flags or names

ld_flags

Search flags + link flags

>>> l = LibraryList(['/dir1/liba.a', '/dir2/libb.a', '/dir1/liba.so'])
>>> l.ld_flags
'-L/dir1 -L/dir2 -la -lb'
Returns:A joined list of search flags and link flags
Return type:str
libraries

Stable de-duplication of library files.

Returns:A list of library files
Return type:list of strings
link_flags

Link flags for the libraries

>>> l = LibraryList(['/dir1/liba.a', '/dir2/libb.a', '/dir1/liba.so'])
>>> l.link_flags
'-la -lb'
Returns:A joined list of link flags
Return type:str
names

Stable de-duplication of library names in the list

>>> l = LibraryList(['/dir1/liba.a', '/dir2/libb.a', '/dir3/liba.so'])
>>> l.names
['a', 'b']
Returns:A list of library names
Return type:list of strings
search_flags

Search flags for the libraries

>>> l = LibraryList(['/dir1/liba.a', '/dir2/libb.a', '/dir1/liba.so'])
>>> l.search_flags
'-L/dir1 -L/dir2'
Returns:A joined list of search flags
Return type:str
spack.ancestor(dir, n=1)

Get the nth ancestor of a directory.

spack.can_access(file_name)

True if we have read/write access to the file.

spack.change_sed_delimiter(old_delim, new_delim, *filenames)

Find all sed search/replace commands and change the delimiter.

e.g., if the file contains seds that look like 's///', you can call change_sed_delimiter('/', '@', file) to change the delimiter to '@'.

Note that this routine will fail if the delimiter is ' or ". Handling those is left for future work.

Parameters:
  • old_delim (str) – The delimiter to search for
  • new_delim (str) – The delimiter to replace with
  • *filenames – One or more files to search and replace
spack.copy_mode(src, dest)

Set the mode of dest to that of src unless it is a link.

spack.filter_file(regex, repl, *filenames, **kwargs)

Like sed, but uses python regular expressions.

Filters every line of each file through regex and replaces the file with a filtered version. Preserves mode of filtered files.

As with re.sub, repl can be either a string or a callable. If it is a callable, it is passed the match object and should return a suitable replacement string. If it is a string, it can contain \1, \2, etc. to represent back-substitution as sed would allow.

Parameters:
  • regex (str) – The regular expression to search for
  • repl (str) – The string to replace matches with
  • *filenames – One or more files to search and replace
Keyword Arguments:
 
  • string (bool) – Treat regex as a plain string. Default is False
  • backup (bool) – Make backup file(s) suffixed with ~. Default is True
  • ignore_absent (bool) – Ignore any files that don’t exist. Default is False
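
A short sketch of typical usage (file names and patterns are illustrative):

# Regex replacement: force the Makefile to use the 'cc' compiler wrapper.
filter_file(r'^CC\s*=.*', 'CC = cc', 'Makefile')

# Plain-string replacement, without keeping a backup copy.
filter_file('@PREFIX@', '/usr/local', 'config.h', string=True, backup=False)
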
spack.find(root, files, recurse=True)

Search for files starting from the root directory.

Like GNU/BSD find but written entirely in Python.

Examples:

$ find /usr -name python

is equivalent to:

>>> find('/usr', 'python')

$ find /usr/local/bin -maxdepth 1 -name python

is equivalent to:

>>> find('/usr/local/bin', 'python', recurse=False)

Accepts any glob characters accepted by fnmatch:

Pattern Meaning
*       matches everything
?       matches any single character
[seq]   matches any character in seq
[!seq]  matches any character not in seq
Parameters:
  • root (str) – The root directory to start searching from
  • files (str or collections.Sequence) – Library name(s) to search for
  • recurse (bool, optional) – if False search only root folder, if True descends top-down from the root. Defaults to True.
Returns:

The files that have been found

Return type:

list of strings

spack.find_headers(headers, root, recurse=False)

Returns an iterable object containing a list of full paths to headers if found.

Accepts any glob characters accepted by fnmatch:

Pattern Meaning
*       matches everything
?       matches any single character
[seq]   matches any character in seq
[!seq]  matches any character not in seq
Parameters:
  • headers (str or list of str) – Header name(s) to search for
  • root (str) – The root directory to start searching from
  • recurse (bool, optional) – if False search only root folder, if True descends top-down from the root. Defaults to False.
Returns:

The headers that have been found

Return type:

HeaderList

spack.find_libraries(libraries, root, shared=True, recurse=False)

Returns an iterable of full paths to libraries found in a root dir.

Accepts any glob characters accepted by fnmatch:

Pattern Meaning
*       matches everything
?       matches any single character
[seq]   matches any character in seq
[!seq]  matches any character not in seq
Parameters:
  • libraries (str or list of str) – Library name(s) to search for
  • root (str) – The root directory to start searching from
  • shared (bool, optional) – if True searches for shared libraries, otherwise for static. Defaults to True.
  • recurse (bool, optional) – if False search only root folder, if True descends top-down from the root. Defaults to False.
Returns:

The libraries that have been found

Return type:

LibraryList
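
A sketch of typical usage inside a package's libs property (the prefix and library name are illustrative):

# Recursively look for the shared HDF5 library under an install prefix.
libs = find_libraries('libhdf5', root=self.prefix, shared=True, recurse=True)
print(libs.ld_flags)    # e.g. '-L<prefix>/lib -lhdf5'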

spack.find_system_libraries(libraries, shared=True)

Searches the usual system library locations for libraries.

Search order is as follows:

  1. /lib64
  2. /lib
  3. /usr/lib64
  4. /usr/lib
  5. /usr/local/lib64
  6. /usr/local/lib

Accepts any glob characters accepted by fnmatch:

Pattern Meaning
*       matches everything
?       matches any single character
[seq]   matches any character in seq
[!seq]  matches any character not in seq
Parameters:
  • libraries (str or list of str) – Library name(s) to search for
  • shared (bool, optional) – if True searches for shared libraries, otherwise for static. Defaults to True.
Returns:

The libraries that have been found

Return type:

LibraryList

spack.fix_darwin_install_name(path)

Fix install name of dynamic libraries on Darwin to have full path.

There are two parts of this task:

  1. Use install_name('-id', ...) to change install name of a single lib
  2. Use install_name('-change', ...) to change the cross linking between libs. The function assumes that all libraries are in one folder and currently won’t follow subfolders.
Parameters:path (str) – directory in which .dylib files are located
spack.force_remove(*paths)

Remove files without printing errors. Like rm -f, does NOT remove directories.

spack.force_symlink(src, dest)
spack.hide_files(*args, **kwds)
spack.install(src, dest)

Manually install a file to a particular location.

spack.install_tree(src, dest, **kwargs)

Manually install a directory tree to a particular location.

spack.is_exe(path)

True if path is an executable file.

spack.join_path(prefix, *args)
spack.mkdirp(*paths)

Creates a directory, as well as parent directories if needed.

spack.remove_dead_links(root)

Removes any dead link that is present in root.

Parameters:root (str) – path where to search for dead links
spack.remove_if_dead_link(path)

Removes the argument if it is a dead link.

Parameters:path (str) – The potential dead link
spack.remove_linked_tree(path)

Removes a directory and its contents.

If the directory is a symlink, follows the link and removes the real directory before removing the link.

Parameters:path (str) – Directory to be removed
spack.set_executable(path)
spack.set_install_permissions(path)

Set appropriate permissions on the installed file.

spack.touch(path)

Creates an empty file at the specified path.

spack.touchp(path)

Like touch, but creates any parent directories needed for the file.

spack.traverse_tree(source_root, dest_root, rel_path='', **kwargs)

Traverse two filesystem trees simultaneously.

Walks the LinkTree directory in pre or post order. Yields each file in the source directory with a matching path from the dest directory, along with whether the file is a directory. e.g., for this tree:

root/
  a/
    file1
    file2
  b/
    file3

When called on dest, this yields:

('root',         'dest')
('root/a',       'dest/a')
('root/a/file1', 'dest/a/file1')
('root/a/file2', 'dest/a/file2')
('root/b',       'dest/b')
('root/b/file3', 'dest/b/file3')
Keyword Arguments:
 
  • order (str) – Whether to do pre- or post-order traversal. Accepted values are ‘pre’ and ‘post’
  • ignore (str) – Predicate indicating which files to ignore
  • follow_nonexisting (bool) – Whether to descend into directories in src that do not exist in dest. Default is True
  • follow_links (bool) – Whether to descend into symlinks in src
spack.unset_executable_mode(path)
spack.working_dir(*args, **kwds)
spack.version(*args, **kwargs)

Adds a version and metadata describing how to fetch it. Metadata is just stored as a dict in the package’s versions dictionary. Package must turn it into a valid fetch strategy later.

spack.conflicts(*args, **kwargs)

Allows a package to define a conflict.

Currently, a “conflict” is a concretized configuration that is known to be invalid. For example, a package that is known not to be buildable with Intel compilers can declare:

conflicts('%intel')

To express the same constraint only when the ‘foo’ variant is activated:

conflicts('%intel', when='+foo')
Parameters:
  • conflict_spec (Spec) – constraint defining the known conflict
  • when (Spec) – optional constraint that triggers the conflict
  • msg (str) – optional user defined message
spack.depends_on(*args, **kwargs)

Creates a dict of deps with specs defining when they apply.

Parameters:
  • spec (Spec or str) – the package and constraints depended on
  • when (Spec or str) – when the dependent satisfies this, it has the dependency represented by spec
  • type (str or tuple of str) – str or tuple of legal Spack deptypes
  • patches (obj or list) – single result of patch() directive, a str to be passed to patch, or a list of these

This directive is to be used inside a Package definition to declare that the package requires other packages to be built first. @see The section “Dependency specs” in the Spack Packaging Guide.
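
A few illustrative uses inside a package definition (the package names are examples only):

depends_on('cmake@3.1:', type='build')
depends_on('mpi', when='+mpi')
depends_on('zlib', type=('build', 'link'))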

spack.extends(*args, **kwargs)

Same as depends_on, but dependency is symlinked into parent prefix.

This is for Python and other language modules where the module needs to be installed into the prefix of the Python installation. Spack handles this by installing modules into their own prefix, but allowing ONE module version to be symlinked into a parent Python install at a time.

Keyword arguments can be passed to extends() so that extension packages can pass parameters to the extendee’s extension mechanism.

spack.provides(*args, **kwargs)

Allows packages to provide a virtual dependency. If a package provides ‘mpi’, other packages can declare that they depend on “mpi”, and spack can use the providing package to satisfy the dependency.

spack.patch(*args, **kwargs)

Packages can declare patches to apply to source. You can optionally provide a when spec to indicate that a particular patch should only be applied when the package’s spec meets certain conditions (e.g. a particular version).

Parameters:
  • url_or_filename (str) – url or filename of the patch
  • level (int) – patch level (as in the patch shell command)
  • when (Spec) – optional anonymous spec that specifies when to apply the patch
  • working_dir (str) – dir to change to before applying
Keyword Arguments:
 
  • sha256 (str) – sha256 sum of the patch, used to verify the patch (only required for URL patches)
  • archive_sha256 (str) – sha256 sum of the archive, if the patch is compressed (only required for compressed URL patches)
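
Illustrative uses inside a package definition (file names, URL, and version ranges are hypothetical; the sha256 value is a placeholder):

# Patch shipped with the package repository, applied only to old versions.
patch('fix-build.patch', when='@:1.4')

# Patch fetched from a URL; a sha256 checksum is required in this case.
patch('https://example.com/fix.patch', level=1,
      sha256='<sha256 of the patch file>')
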
spack.variant(*args, **kwargs)

Define a variant for the package. Packager can specify a default value as well as a text description.

Parameters:
  • name (str) – name of the variant
  • default (str or bool) – default value for the variant; if not specified otherwise, the default will be False for a boolean variant and ‘nothing’ for a multi-valued variant
  • description (str) – description of the purpose of the variant
  • values (tuple or callable) – either a tuple of strings containing the allowed values, or a callable accepting one value and returning True if it is valid
  • multi (bool) – if False only one value per spec is allowed for this variant
  • validator (callable) – optional group validator to enforce additional logic. It receives a tuple of values and should raise an instance of SpackError if the group doesn’t meet the additional constraints
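
Illustrative variant declarations (names and values are examples only):

variant('shared', default=True, description='Build shared libraries')
variant('languages', default='c,c++', multi=True,
        values=('c', 'c++', 'fortran'),
        description='Languages to enable')
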
spack.resource(*args, **kwargs)

Define an external resource to be fetched and staged when building the package. Based on the keywords present in the dictionary, the appropriate FetchStrategy will be used for the resource. Resources are fetched and staged in their own folder inside the Spack stage area, and then moved into the stage area of the package that needs them.

List of recognized keywords:

  • ‘when’ : (optional) represents the condition upon which the resource is needed
  • ‘destination’ : (optional) path where to move the resource. This path must be relative to the main package stage area.
  • ‘placement’ : (optional) gives the possibility to fine tune how the resource is moved into the main package stage area.
class spack.Executable(name)

Bases: object

Class representing a program that can be run on the command line.

add_default_arg(arg)

Add a default argument to the command.

add_default_env(key, value)

Set an environment variable when the command is run.

Parameters:
  • key – The environment variable to set
  • value – The value to set it to
command

The command-line string.

Returns:The executable and default arguments
Return type:str
name

The executable name.

Returns:The basename of the executable
Return type:str
path

The path to the executable.

Returns:The path to the executable
Return type:str
spack.which(*args, **kwargs)

Finds an executable in the path like command-line which.

If given multiple executables, returns the first one that is found. If no executables are found, returns None.

Parameters:

*args (str) – One or more executables to search for

Keyword Arguments:
 
  • path (list or str) – The path to search. Defaults to PATH
  • required (bool) – If set to True, raise an error if executable not found
Returns:

The first executable that is found in the path

Return type:

Executable
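
A sketch of how which() and Executable are typically used together (the program and arguments are illustrative):

cmake = which('cmake', required=True)   # Executable, or raises if not found
cmake.add_default_arg('-Wno-dev')       # appended to every invocation
cmake('..')                             # runs: cmake -Wno-dev ..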

exception spack.ProcessError(message, long_message=None)

Bases: spack.error.SpackError

ProcessErrors are raised when Executables exit with an error code.

spack.install_dependency_symlinks(pkg, spec, prefix)

Execute a dummy install and flatten dependencies

spack.flatten_dependencies(spec, flat_dir)

Make each dependency of spec present in dir via symlink.

exception spack.DependencyConflictError(conflict)

Bases: spack.error.SpackError

Raised when the dependencies cannot be flattened as asked for.

exception spack.InstallError(message, long_msg=None)

Bases: spack.error.SpackError

Raised when something goes wrong during install or uninstall.

exception spack.ExternalPackageError(message, long_msg=None)

Bases: spack.package.InstallError

Raised by install() when a package is only for external use.