spack package

Subpackages

Submodules

spack.abi module

class spack.abi.ABI

Bases: object

This class provides methods to test ABI compatibility between specs. The current implementation is rather rough and could be improved.

architecture_compatible(parent, child)

Return true if parent and child have ABI compatible targets.

compatible(parent, child, **kwargs)

Returns true iff a parent and child spec are ABI compatible

compiler_compatible(parent, child, **kwargs)

Return true if compilers for parent and child are ABI compatible.
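
A minimal sketch of how these methods might be called; the package names below are illustrative, and any two concrete specs could be compared:

import spack.abi
import spack.spec

# Hypothetical specs: concretize an abstract spec and pick one of its
# dependencies to compare against.
parent = spack.spec.Spec('mpileaks ^mpich').concretized()
child = parent['mpich']

checker = spack.abi.ABI()
if checker.compatible(parent, child):
    print('parent and child are ABI compatible')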

spack.architecture module

This module contains all the elements that are required to create an architecture object. These include the target processor, the operating system, and the architecture platform (e.g. cray, darwin, linux, bgq, etc.) classes.

On a multiple architecture machine, the architecture spec field can be set to build a package against any target and operating system that is present on the platform. On Cray platforms or any other architecture that has different front and back end environments, the operating system will determine the method of compiler detection.

There are two different types of compiler detection:
  1. Through the $PATH env variable (front-end detection)
  2. Through the tcl module system (back-end detection)

Depending on which operating system is specified, the compiler will be detected using one of those methods.

For platforms such as linux and darwin, the operating system is autodetected and the target is set to be x86_64.

The command line syntax for specifying an architecture is as follows:

target=<Target name> os=<OperatingSystem name>
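
For example (the package, target, and OS names are purely illustrative; frontend and backend are the reserved aliases described in the next paragraph):

libelf target=x86_64 os=ubuntu18.04
libelf os=frontend target=backend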

If the user wishes to use the defaults, either target or os can be left out of the command line and Spack will concretize using the default. These defaults are set in the ‘platforms/’ directory, which contains the different platform subclasses. If the machine has multiple architectures, the user can also enter frontend (or fe) or backend (or be); these settings will concretize to the respective front-end and back-end targets and operating systems. Additional platforms can be added by creating a subclass of Platform and adding it inside the platform directory.

Platform is an abstract class that is extended by subclasses. If the user wants to add a new type of platform (such as cray_xe), they can create a subclass and set all the class attributes, such as priority, front_end, back_end, front_os, and back_os. A lower priority number signifies higher priority. These numbers are set arbitrarily and can be changed, though there is usually little need to unless a new platform is added and the user wants it to be detected first.

Targets are created inside the platform subclasses. Most architectures (like linux and darwin) have only one target (x86_64), but Cray machines have both a front-end and a back-end processor. The user can specify which targets are present on the front-end and back-end architectures.

Depending on the platform, operating systems are either auto-detected or set explicitly. The user can set the front-end and back-end operating systems via the class attributes front_os and back_os. The operating system, as described earlier, is responsible for compiler detection.
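
A minimal sketch of such a subclass, using the class attributes and methods documented below; every platform name, target, and operating system in it is an illustrative assumption rather than a real Spack platform:

from spack.architecture import OperatingSystem, Platform, Target

class MyCluster(Platform):
    # All values below are placeholders.
    priority   = 50            # lower number = higher detection priority
    front_end  = 'x86_64'
    back_end   = 'haswell'
    default    = 'haswell'
    front_os   = 'login-os'
    back_os    = 'compute-os'
    default_os = 'compute-os'

    def __init__(self):
        super(MyCluster, self).__init__('mycluster')
        self.add_target(self.front_end, Target(self.front_end))
        self.add_target(self.back_end, Target(self.back_end))
        self.add_operating_system(self.front_os, OperatingSystem('login-os', '1'))
        self.add_operating_system(self.back_os, OperatingSystem('compute-os', '1'))

    @classmethod
    def detect(cls):
        # Return True only if this really is the current machine.
        return False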

class spack.architecture.Arch(plat=None, os=None, target=None)

Bases: object

Architecture is now a class to help with setting attributes.

TODO: refactor so that we don’t need this class.

concrete
static from_dict(d)
to_dict()
exception spack.architecture.NoPlatformError

Bases: spack.error.SpackError

class spack.architecture.OperatingSystem(name, version)

Bases: object

OperatingSystem is a class, similar to Platform, that is extended by subclasses for operating-system specifics. It contains the compiler-finding logic: instead of calling two separate methods to find compilers, we call find_compilers for each operating system.

to_dict()
class spack.architecture.Platform(name)

Bases: object

Abstract class that each type of Platform will subclass. Spack instantiates the subclass that detects the current platform.

add_operating_system(name, os_class)

Add the operating_system class object into the platform.operating_sys dictionary

add_target(name, target)

Used by the platform specific subclass to list available targets. Raises an error if the platform specifies a name that is reserved by spack as an alias.

back_end = None
back_os = None
default = None
default_os = None
classmethod detect()

Subclass is responsible for implementing this method. Returns True if the Platform class detects that it is the current platform and False if it’s not.

front_end = None
front_os = None
operating_system(name)
priority = None
reserved_oss = ['default_os', 'frontend', 'fe', 'backend', 'be']
reserved_targets = ['default_target', 'frontend', 'fe', 'backend', 'be']
classmethod setup_platform_environment(pkg, env)

Subclass can override this method if it requires any platform-specific build environment modifications.

target(name)

This is a getter method for the target dictionary that handles defaulting based on the values provided by default, front-end, and back-end. It can be overridden by a subclass that wants to provide further aliasing options.

class spack.architecture.Target(name, module_name=None)

Bases: object

static from_dict_or_value(dict_or_value)
name
optimization_flags(compiler)

Returns the flags needed to optimize for this target using the compiler passed as argument.

Parameters:compiler (CompilerSpec or Compiler) – object that contains both the name and the version of the compiler we want to use
to_dict_or_value()

Returns a dict or a value representing the current target.

String values are used to keep backward compatibility with generic targets, like e.g. x86_64 or ppc64. More specific micro-architectures will return a dictionary which contains information on the name, features, vendor, generation and parents of the current target.
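
As a rough illustration (the exact field values are assumptions): a generic target such as x86_64 serializes to the plain string 'x86_64', while a more specific micro-architecture serializes to something like:

{
    'name': 'haswell',
    'vendor': 'GenuineIntel',
    'features': ['sse2', 'avx', 'avx2'],
    'generation': 0,
    'parents': ['ivybridge']
}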

spack.architecture.all_platforms()
spack.architecture.arch_for_spec(arch_spec)

Transforms the given architecture spec into an architecture object.

spack.architecture.compatible_sys_types()

Returns a list of all the systypes compatible with the current host.

spack.architecture.get_platform(platform_name)

Returns a platform object that corresponds to the given name.

spack.architecture.platform()

Detects the platform for this machine.

Gathers a list of all available Platform subclasses and sorts it according to their priority. Priority is an arbitrarily set number. The platform is detected either using uname or a file path (/opt/cray…).

spack.architecture.sys_type()

Print out the “default” platform-os-target tuple for this machine.

On machines with only one OS/target, prints the platform-os-target for the front end. For machines with a front end and a back end, prints the default back end.

TODO: replace with use of more explicit methods to get all the backends, as client code should really be aware of cross-compiled architectures.

spack.architecture.verify_platform(platform_name)

Determines whether or not the platform with the given name is supported in Spack. For more information, see the ‘spack.platforms’ submodule.

spack.binary_distribution module

exception spack.binary_distribution.NewLayoutException(msg)

Bases: spack.error.SpackError

Raised if directory layout is different from buildcache.

exception spack.binary_distribution.NoChecksumException(message, long_message=None)

Bases: spack.error.SpackError

Raised if file fails checksum verification.

exception spack.binary_distribution.NoGpgException(msg)

Bases: spack.error.SpackError

Raised when gpg2 is not in PATH

exception spack.binary_distribution.NoKeyException(msg)

Bases: spack.error.SpackError

Raised when gpg has no default key added.

exception spack.binary_distribution.NoOverwriteException(file_path)

Bases: spack.error.SpackError

Raised when a file exists and must be overwritten.

exception spack.binary_distribution.NoVerifyException(message, long_message=None)

Bases: spack.error.SpackError

Raised if file fails signature verification.

exception spack.binary_distribution.PickKeyException(keys)

Bases: spack.error.SpackError

Raised when multiple keys can be used to sign.

spack.binary_distribution.build_cache_prefix(prefix)
spack.binary_distribution.build_cache_relative_path()
spack.binary_distribution.build_tarball(spec, outdir, force=False, rel=False, unsigned=False, allow_root=False, key=None, regenerate_index=False)

Build a tarball from given spec and put it into the directory structure used at the mirror (following <tarball_directory_name>).

spack.binary_distribution.buildinfo_file_name(prefix)

Filename of the binary package meta-data file

spack.binary_distribution.check_package_relocatable(workdir, spec, allow_root)

Check if package binaries are relocatable. Change links to placeholder links.

spack.binary_distribution.check_specs_against_mirrors(mirrors, specs, output_file=None, rebuild_on_errors=False)

Check all the given specs against buildcaches on the given mirrors and determine if any of the specs need to be rebuilt. Reasons for needing to rebuild include: the binary cache entry for a spec is not present on a mirror, or it is present but the full_hash has changed since the spec was last built.

Parameters:
  • mirrors (dict) – Mirrors to check against
  • specs (iterable) – Specs to check against mirrors
  • output_file (string) – Path to output file to be written. If provided, mirrors with missing or out-of-date specs will be formatted as a JSON object and written to this file.
  • rebuild_on_errors (boolean) – Treat any errors encountered while checking specs as a signal to rebuild package.

Returns: 1 if any spec was out-of-date on any mirror, 0 otherwise.
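
A hedged usage sketch; the mirror name, URL, spec, and output file are placeholders:

import spack.binary_distribution as bindist
import spack.spec

mirrors = {'my-mirror': 'https://example.com/build_cache'}   # placeholder
specs = [spack.spec.Spec('zlib').concretized()]               # placeholder

ret = bindist.check_specs_against_mirrors(
    mirrors, specs, output_file='rebuilds.json', rebuild_on_errors=False)
if ret != 0:
    print('at least one spec needs to be rebuilt')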

spack.binary_distribution.checksum_tarball(file)
spack.binary_distribution.download_buildcache_entry(file_descriptions, mirror_url=None)
spack.binary_distribution.download_tarball(spec)

Download the binary tarball for the given package into the stage area. Return True if successful.

spack.binary_distribution.extract_tarball(spec, filename, allow_root=False, unsigned=False, force=False)

Extract the binary tarball for the given package into the install area.

spack.binary_distribution.generate_package_index(cache_prefix)

Create the build cache index page.

Creates (or replaces) the “index.json” page at the location given in cache_prefix. This page contains a link for each binary package (.yaml) and public key (.key) under cache_prefix.

spack.binary_distribution.get_keys(install=False, trust=False, force=False)

Get pgp public keys available on mirror with suffix .key or .pub

spack.binary_distribution.get_spec(spec=None, force=False)

Check if spec.yaml exists on mirrors and return it if it does

spack.binary_distribution.get_specs(allarch=False)

Get spec.yaml’s for build caches available on mirror

spack.binary_distribution.make_package_relative(workdir, spec, allow_root)

Change paths in binaries to relative paths. Change absolute symlinks to relative symlinks.

spack.binary_distribution.needs_rebuild(spec, mirror_url, rebuild_on_errors=False)
spack.binary_distribution.read_buildinfo_file(prefix)

Read buildinfo file

spack.binary_distribution.relocate_package(spec, allow_root)

Relocate the given package

spack.binary_distribution.sign_tarball(key, force, specfile_path)
spack.binary_distribution.tarball_directory_name(spec)

Return name of the tarball directory according to the convention <os>-<architecture>/<compiler>/<package>-<version>/

spack.binary_distribution.tarball_name(spec, ext)

Return the name of the tarfile according to the convention <os>-<architecture>-<package>-<dag_hash><ext>

spack.binary_distribution.tarball_path_name(spec, ext)

Return the full path+name for a given spec according to the convention <tarball_directory_name>/<tarball_name>
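
Putting the two conventions together, a concrete path might look roughly like this (the OS, compiler, and package values are illustrative, and <dag_hash> and <ext> are left as placeholders):

linux-x86_64/gcc-9.3.0/zlib-1.2.11/linux-x86_64-zlib-<dag_hash><ext>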

spack.binary_distribution.try_download_specs(urls=None, force=False)

Try to download the urls and cache them

spack.binary_distribution.write_buildinfo_file(spec, workdir, rel=False)

Create a cache file containing information required for the relocation

spack.build_environment module

This module contains all routines related to setting up the package build environment. All of this is set up by package.py just before install() is called.

There are two parts to the build environment:

  1. Python build environment (i.e. install() method)

    This is how things are set up when install() is called. Spack takes advantage of each package being in its own module by adding a bunch of command-like functions (like configure(), make(), etc.) in the package’s module scope. This allows package writers to call them all directly in Package.install() without writing ‘self.’ everywhere. No, this isn’t Pythonic. Yes, it makes the code more readable and more like the shell script from which someone is likely porting.

  2. Build execution environment

    This is the set of environment variables, like PATH, CC, CXX, etc. that control the build. There are also a number of environment variables used to pass information (like RPATHs and other information about dependencies) to Spack’s compiler wrappers. All of these env vars are also set up here.

Skimming this module is a nice way to get acquainted with the types of calls you can make from within the install() function.

exception spack.build_environment.ChildError(msg, module, classname, traceback_string, build_log, context)

Bases: spack.build_environment.InstallError

Special exception class for wrapping exceptions from child processes
in Spack’s build environment.

The main features of a ChildError are:

  1. They’re serializable, so when a child build fails, we can send one of these to the parent and let the parent report what happened.
  2. They have a traceback field containing a traceback generated on the child immediately after failure. Spack will print this on failure in lieu of trying to run sys.excepthook on the parent process, so users will see the correct stack trace from a child.
  3. They also contain context, which shows context in the Package implementation where the error happened. This helps people debug Python code in their packages. To get it, Spack searches the stack trace for the deepest frame where self is in scope and is an instance of PackageBase. This will generally find a useful spot in the package.py file.

The long_message of a ChildError displays one of two things:

  1. If the original error was a ProcessError, indicating a command died during the build, we’ll show context from the build log.
  2. If the original error was any other type of error, we’ll show context from the Python code.

SpackError handles displaying the special traceback if we’re in debug mode with spack -d.

build_errors = [('spack.util.executable', 'ProcessError')]
long_message
exception spack.build_environment.InstallError(message, long_message=None)

Bases: spack.error.SpackError

Raised by packages when a package fails to install.

Any subclass of InstallError will be annotated by Spack with a pkg attribute on failure, which the caller can use to get the package for which the exception was raised.

class spack.build_environment.MakeExecutable(name, jobs)

Bases: spack.util.executable.Executable

Special callable executable object for make so the user can specify parallelism options on a per-invocation basis. Specifying ‘parallel’ to the call will override whatever the package’s global setting is, so you can either default to true or false and override particular calls. Specifying ‘jobs_env’ to a particular call will name an environment variable which will be set to the parallelism level (without affecting the normal invocation with -j).

Note that if the SPACK_NO_PARALLEL_MAKE env var is set it overrides everything.
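
A short hedged sketch of how this might be used inside an install() method; the make targets, job count, and environment variable name are illustrative:

from spack.build_environment import MakeExecutable

make = MakeExecutable('make', 16)    # 16 parallel jobs
make('install')                      # parallel by default (roughly: make -j16 install)
make('check', parallel=False)        # force this invocation to be serial
make('all', jobs_env='MAKE_JOBS')    # also set MAKE_JOBS=16 for the child process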

exception spack.build_environment.StopPhase(message, long_message=None)

Bases: spack.error.SpackError

Pickle-able exception to control stopped builds.

spack.build_environment.clean_environment()
spack.build_environment.fork(pkg, function, dirty, fake)

Fork a child process to do part of a spack build.

Parameters:
  • pkg (PackageBase) – package whose environment we should set up the forked process for.
  • function (callable) – argless function to run in the child process.
  • dirty (bool) – If True, do NOT clean the environment before building.
  • fake (bool) – If True, skip package setup b/c it’s not a real build

Usage:

def child_fun():
    # do stuff
build_env.fork(pkg, child_fun, dirty=False, fake=False)

Forked processes are run with the build environment set up by spack.build_environment. This allows package authors to have full control over the environment, etc. without affecting other builds that might be executed in the same spack call.

If something goes wrong, the child process catches the error and passes it to the parent wrapped in a ChildError. The parent is expected to handle (or re-raise) the ChildError.

spack.build_environment.get_package_context(traceback, context=3)

Return some context for an error message when the build fails.

Parameters:
  • traceback (traceback) – A traceback from some exception raised during install
  • context (int) – Lines of context to show before and after the line where the error happened

This function inspects the stack to find where we failed in the package file, and it adds detailed context to the long_message from there.

spack.build_environment.get_rpath_deps(pkg)

Return immediate or transitive RPATHs depending on the package.

spack.build_environment.get_rpaths(pkg)

Get a list of all the rpaths for a package.

spack.build_environment.get_std_cmake_args(pkg)

List of standard arguments used if a package is a CMakePackage.

Parameters:pkg (PackageBase) – package under consideration
Returns:standard arguments that would be used if this package were a CMakePackage instance, i.e. arguments for cmake
Return type:list of str
spack.build_environment.get_std_meson_args(pkg)

List of standard arguments used if a package is a MesonPackage.

Parameters:pkg (PackageBase) – package under consideration
Returns:standard arguments that would be used if this package were a MesonPackage instance, i.e. arguments for meson
Return type:list of str
spack.build_environment.load_external_modules(pkg)

Traverse a package’s spec DAG and load any external modules.

Traverse a package’s dependencies and load any external modules associated with them.

Parameters:pkg (PackageBase) – package to load deps for
spack.build_environment.modifications_from_dependencies(spec, context)

Returns the environment modifications that are required by the dependencies of a spec and also applies modifications to this spec’s package at module scope, if need be.

Parameters:
  • spec (Spec) – spec for which we want the modifications
  • context (str) – either ‘build’ for build-time modifications or ‘run’ for run-time modifications
spack.build_environment.parent_class_modules(cls)

Get list of superclass modules that descend from spack.package.PackageBase

Includes cls.__module__

spack.build_environment.set_build_environment_variables(pkg, env, dirty)

Ensure a clean install environment when we build packages.

This involves unsetting pesky environment variables that may affect the build. It also involves setting environment variables used by Spack’s compiler wrappers.

Parameters:
  • pkg – The package we are building
  • env – The build environment
  • dirty (bool) – Skip unsetting the user’s environment settings
spack.build_environment.set_compiler_environment_variables(pkg, env)
spack.build_environment.set_module_variables_for_package(pkg)

Populate the module scope of install() with some useful functions. This makes things easier for package writers.

spack.build_environment.setup_package(pkg, dirty)

Execute all environment setup routines.

spack.caches module

Caches used by Spack to store data

class spack.caches.MirrorCache(root, skip_unstable_versions)

Bases: object

store(fetcher, relative_dest)

Fetch and relocate the fetcher’s target into our mirror cache.

Symlink a human-readable path in our mirror to the actual storage location.

spack.caches.fetch_cache = <spack.fetch_strategy.FsCache object>

Spack’s local cache for downloaded source archives

spack.caches.misc_cache = <spack.util.file_cache.FileCache object>

Spack’s cache for small data

spack.ci module

class spack.ci.TemporaryDirectory

Bases: object

spack.ci.compute_spec_deps(spec_list)

Computes all the dependencies for the spec(s) and generates a JSON object which provides both a list of unique spec names as well as a comprehensive list of all the edges in the dependency graph. For example, given a single spec like ‘readline@7.0’, this function generates the following JSON object:

{
    "dependencies": [
        {
            "depends": "readline/ip6aiun",
            "spec": "readline/ip6aiun"
        },
        {
            "depends": "ncurses/y43rifz",
            "spec": "readline/ip6aiun"
        },
        {
            "depends": "ncurses/y43rifz",
            "spec": "readline/ip6aiun"
        },
        {
            "depends": "pkgconf/eg355zb",
            "spec": "ncurses/y43rifz"
        },
        {
            "depends": "pkgconf/eg355zb",
            "spec": "readline/ip6aiun"
        }
    ],
    "specs": [
        {
          "root_spec": "readline@7.0%apple-clang@9.1.0 arch=darwin-...",
          "spec": "readline@7.0%apple-clang@9.1.0 arch=darwin-highs...",
          "label": "readline/ip6aiun"
        },
        {
          "root_spec": "readline@7.0%apple-clang@9.1.0 arch=darwin-...",
          "spec": "ncurses@6.1%apple-clang@9.1.0 arch=darwin-highsi...",
          "label": "ncurses/y43rifz"
        },
        {
          "root_spec": "readline@7.0%apple-clang@9.1.0 arch=darwin-...",
          "spec": "pkgconf@1.5.4%apple-clang@9.1.0 arch=darwin-high...",
          "label": "pkgconf/eg355zb"
        }
    ]
}
spack.ci.configure_compilers(compiler_action, scope=None)
spack.ci.copy_stage_logs_to_artifacts(job_spec, job_log_dir)
spack.ci.find_matching_config(spec, ci_mappings)
spack.ci.format_job_needs(phase_name, strip_compilers, dep_jobs, osname, build_group, enable_artifacts_buildcache)
spack.ci.format_root_spec(spec, main_phase, strip_compiler)
spack.ci.generate_gitlab_ci_yaml(env, print_summary, output_file, custom_spack_repo=None, custom_spack_ref=None, run_optimizer=False, use_dependencies=False)
spack.ci.get_cdash_build_name(spec, build_group)
spack.ci.get_concrete_specs(root_spec, job_name, related_builds, compiler_action)
spack.ci.get_job_name(phase, strip_compiler, spec, osarch, build_group)
spack.ci.get_spec_dependencies(specs, deps, spec_labels)
spack.ci.get_spec_string(spec)
spack.ci.import_signing_key(base64_signing_key)
spack.ci.is_main_phase(phase_name)
spack.ci.pkg_name_from_spec_label(spec_label)
spack.ci.populate_buildgroup(job_names, group_name, project, site, credentials, cdash_url)
spack.ci.print_staging_summary(spec_labels, dependencies, stages)
spack.ci.push_mirror_contents(env, spec, yaml_path, mirror_url, build_id)
spack.ci.read_cdashid_from_mirror(spec, mirror_url)
spack.ci.register_cdash_build(build_name, base_url, project, site, track)
spack.ci.relate_cdash_builds(spec_map, cdash_base_url, job_build_id, cdash_project, cdashids_mirror_url)
spack.ci.spec_deps_key_label(s)
spack.ci.spec_matches(spec, match_string)
spack.ci.stage_spec_jobs(specs)

Take a set of release specs and generate a list of “stages”, where the jobs in any stage are dependent only on jobs in previous stages. This allows us to maximize build parallelism within the gitlab-ci framework.

Parameters:specs (Iterable) – Specs to build

Returns: A tuple of information objects describing the specs, dependencies, and stages:

spec_labels: A dictionary mapping the spec labels, which are made of (pkg-name/hash-prefix), to objects containing “rootSpec” and “spec” keys. The root spec is the spec of which this spec is a dependency, and the spec is the formatted spec string for this spec.

deps: A dictionary whose keys also appear as keys in the spec_labels dictionary, and whose values are the set of dependencies for that spec.

stages: An ordered list of sets, each of which contains all the jobs to be built in that stage. The jobs are expressed in the same format as the keys in the spec_labels and deps objects.
spack.ci.url_encode_string(input_string)
spack.ci.write_cdashid_to_mirror(cdashid, spec, mirror_url)

spack.ci_needs_workaround module

spack.ci_needs_workaround.convert_job(job_entry)
spack.ci_needs_workaround.get_job_name(needs_entry)
spack.ci_needs_workaround.needs_to_dependencies(yaml)

spack.ci_optimization module

spack.ci_optimization.add_extends(yaml, key)

Modifies the given object “yaml” so that it includes an “extends” key whose value features “key”.

If “extends” is not in yaml, then yaml is modified such that yaml[“extends”] == key.

If yaml[“extends”] is a str, then yaml is modified such that yaml[“extends”] == [yaml[“extends”], key]

If yaml[“extends”] is a list that does not include key, then key is appended to the list.

Otherwise, yaml is left unchanged.
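
A small sketch of that behavior; the keys and values are illustrative:

import spack.ci_optimization as ci_opt

job = {'script': ['make']}

ci_opt.add_extends(job, '.common')
# job == {'script': ['make'], 'extends': '.common'}

ci_opt.add_extends(job, '.linux')
# 'extends' was a str, so it becomes a list:
# job == {'script': ['make'], 'extends': ['.common', '.linux']}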

spack.ci_optimization.build_histogram(iterator, key)

Builds a histogram of values given an iterable of mappings and a key.

For each mapping “m” with key “key” in iterator, the value m[key] is considered.

Returns a list of tuples (hash, count, proportion, value), where

  • “hash” is a sha1sum hash of the value.
  • “count” is the number of occurrences of values that hash to “hash”.
  • “proportion” is the proportion of all values considered above that hash to “hash”.
  • “value” is one of the values considered above that hash to “hash”. Which value is chosen when multiple values hash to the same “hash” is undefined.

The list is sorted in descending order by count, yielding the most frequently occurring hashes first.

spack.ci_optimization.common_subobject(yaml, sub)

Factor prototype object “sub” out of the values of mapping “yaml”.

Consider a modified copy of yaml, “new”, where for each key, “key” in yaml:

  • If yaml[key] matches sub, then new[key] = subkeys(yaml[key], sub).
  • Otherwise, new[key] = yaml[key].

If the above match criterion is not satisfied for any such key, then (yaml, None) is returned and the yaml object is left unchanged.

Otherwise, each matching value in new is modified as in add_extends(new[key], common_key), and then new[common_key] is set to sub. The common_key value is chosen such that it does not match any preexisting key in new. In this case, (new, common_key) is returned.

spack.ci_optimization.matches(obj, proto)

Returns True if the test object “obj” matches the prototype object “proto”.

If obj and proto are mappings, obj matches proto if (key in obj) and (obj[key] matches proto[key]) for every key in proto.

If obj and proto are sequences, obj matches proto if they are of the same length and (a matches b) for every (a,b) in zip(obj, proto).

Otherwise, obj matches proto if obj == proto.

Precondition: proto must not have any reference cycles
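
For instance, with illustrative data:

import spack.ci_optimization as ci_opt

proto = {'stage': 'build', 'tags': ['docker']}

ci_opt.matches({'stage': 'build', 'tags': ['docker'], 'script': ['make']}, proto)  # True
ci_opt.matches({'stage': 'test', 'tags': ['docker']}, proto)                       # False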

spack.ci_optimization.optimizer(yaml)
spack.ci_optimization.print_delta(name, old, new, applied=None)
spack.ci_optimization.sort_yaml_obj(obj)
spack.ci_optimization.subkeys(obj, proto)

Returns the test mapping “obj” after factoring out the items it has in common with the prototype mapping “proto”.

Consider a recursive merge operation, merge(a, b) on mappings a and b, that returns a mapping, m, whose keys are the union of the keys of a and b, and for every such key, “k”, its corresponding value is:

  • merge(a[key], b[key]) if a[key] and b[key] are mappings, or
  • b[key] if (key in b) and not matches(a[key], b[key]),
    or
  • a[key] otherwise

If obj and proto are mappings, the returned object is the smallest object, “a”, such that merge(a, proto) matches obj.

Otherwise, obj is returned.
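
Roughly, with illustrative data:

import spack.ci_optimization as ci_opt

proto = {'tags': ['docker'], 'variables': {'OS': 'linux'}}
obj = {'tags': ['docker'],
       'variables': {'OS': 'linux', 'ARCH': 'x86_64'},
       'script': ['make']}

ci_opt.subkeys(obj, proto)
# Expected (approximately): {'variables': {'ARCH': 'x86_64'}, 'script': ['make']}
# i.e. only the parts of obj not already provided by proto.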

spack.ci_optimization.try_optimization_pass(name, yaml, optimization_pass, *args, **kwargs)

Try applying an optimization pass and return information about the result

“name” is a string describing the nature of the pass. If it is a non-empty string, summary statistics are also printed to stdout.

“yaml” is the object to apply the pass to.

“optimization_pass” is the function implementing the pass to be applied.

“args” and “kwargs” are the additional arguments to pass to the optimization pass. The pass is applied as

>>> (new_yaml, *other_results) = optimization_pass(yaml, *args, **kwargs)

The pass’s results are greedily rejected if it does not modify the original yaml document, or if it produces a yaml document that serializes to a larger string.

Returns (new_yaml, yaml, applied, other_results) if applied, or (yaml, new_yaml, applied, other_results) otherwise.

spack.compiler module

class spack.compiler.Compiler(cspec, operating_system, target, paths, modules=None, alias=None, environment=None, extra_rpaths=None, enable_implicit_rpaths=None, **kwargs)

Bases: object

This class encapsulates a Spack “compiler”, which includes C, C++, and Fortran compilers. Subclasses should implement support for specific compilers, their possible names, arguments, and how to identify the particular type of compiler.

PrgEnv = None
PrgEnv_compiler = None
c11_flag
c99_flag
cc_names = []
cc_pic_flag

Returns the flag used by the C compiler to produce Position Independent Code (PIC).

cc_rpath_arg
classmethod cc_version(cc)
cxx11_flag
cxx14_flag
cxx17_flag
cxx98_flag
cxx_names = []
cxx_pic_flag

Returns the flag used by the C++ compiler to produce Position Independent Code (PIC).

cxx_rpath_arg
classmethod cxx_version(cxx)
debug_flags
classmethod default_version(cc)

Override just this to override all compiler version functions.

disable_new_dtags
enable_new_dtags
classmethod extract_version_from_output(output)

Extracts the version from compiler’s output.

f77_names = []
f77_pic_flag

Returns the flag used by the F77 compiler to produce Position Independent Code (PIC).

f77_rpath_arg
classmethod f77_version(f77)
fc_names = []
fc_pic_flag

Returns the flag used by the FC compiler to produce Position Independent Code (PIC).

fc_rpath_arg
classmethod fc_version(fc)
get_real_version()

Query the compiler for its version.

This is the “real” compiler version, regardless of what is in the compilers.yaml file, which the user can change to name their compiler.

Use the runtime environment of the compiler (modules and environment modifications) to enable the compiler to run properly on any platform.

ignore_version_errors = ()

Return values to ignore when invoking the compiler to get its version

implicit_rpaths()
linker_arg

Flag that needs to be used to pass an argument to the linker.

openmp_flag
opt_flags
prefixes = []
required_libs

For executables created with this compiler, the compiler libraries that would be generally required to run it.

classmethod search_regexps(language)
setup_custom_environment(pkg, env)

Set any environment variables necessary to use the compiler.

suffixes = ['-.*']
verbose_flag

This property should be overridden in the compiler subclass if a verbose flag is available.

If it is not overridden, it is assumed to not be supported.

verify_executables()

Raise an error if any of the compiler executables is not valid.

This method confirms that for all of the compilers (cc, cxx, f77, fc) that have paths, those paths exist and are executable by the current user. Raises a CompilerAccessError if any of the non-null paths for the compiler are not accessible.

version
version_argument = '-dumpversion'

Compiler argument that produces version information

version_regex = '(.*)'

Regex used to extract version from compiler’s output

spack.concretize module

Functions here are used to take abstract specs and make them concrete. For example, if a spec asks for a version between 1.8 and 1.9, these functions will take the most recent 1.9 version of the package available. Or, if the user didn’t specify a compiler for a spec, then this will assign a compiler to the spec based on defaults or user preferences.

TODO: make this customizable and allow users to configure concretization policies.

class spack.concretize.Concretizer(abstract_spec=None)

Bases: object

You can subclass this class to override some of the default concretization strategies, or you can override all of them.

adjust_target(spec)

Adjusts the target microarchitecture if the compiler is too old to support the default one.

Parameters:spec – spec to be concretized
Returns:True if spec was modified, False otherwise
check_for_compiler_existence = None

Controls whether we check that compiler versions actually exist during concretization. Used for testing and for mirror creation

choose_virtual_or_external(spec)

Given a list of candidate virtual and external packages, try to find one that is most ABI compatible.

concretize_architecture(spec)

If the spec is empty provide the defaults of the platform. If the architecture is not a string type, then check if either the platform, target or operating system are concretized. If any of the fields are changed then return True. If everything is concretized (i.e the architecture attribute is a namedtuple of classes) then return False. If the target is a string type, then convert the string into a concretized architecture. If it has no architecture and the root of the DAG has an architecture, then use the root otherwise use the defaults on the platform.

concretize_compiler(spec)

If the spec already has a compiler, we’re done. If not, then take the compiler used for the nearest ancestor with a compiler spec and use that. If the ancestor’s compiler is not concrete, then use the preferred compiler as specified in spackconfig.

Intuition: Use the spackconfig default if no package that depends on this one has a strict compiler requirement. Otherwise, try to build with the compiler that will be used by libraries that link to this one, to maximize compatibility.

concretize_compiler_flags(spec)

The compiler flags are updated to match those of the spec whose compiler is used, defaulting to no compiler flags in the spec. Default specs set at the compiler level will still be added later.

concretize_variants(spec)

If the spec already has variants filled in, return. Otherwise, add the user preferences from packages.yaml or the default variants from the package specification.

concretize_version(spec)

If the spec is already concrete, return. Otherwise take the preferred version from spackconfig, and default to the package’s version if there are no available versions.

TODO: In many cases we probably want to look for installed versions of each package and use an installed version if we can link to it. The policy implemented here will tend to rebuild a lot of stuff because it will prefer a compiler in the spec to any compiler that already-installed things were built with. There is likely some better policy that finds some middle ground between these two extremes.

target_from_package_preferences(spec)

Returns the preferred target from the package preferences if there’s any.

Parameters:spec – abstract spec to be concretized
exception spack.concretize.InsufficientArchitectureInfoError(spec, archs)

Bases: spack.error.SpackError

Raised when details on architecture cannot be collected from the system

exception spack.concretize.NoBuildError(spec)

Bases: spack.error.SpackError

Raised when a package is configured with the buildable option False, but no satisfactory external versions can be found

exception spack.concretize.NoCompilersForArchError(arch, available_os_targets)

Bases: spack.error.SpackError

exception spack.concretize.NoValidVersionError(spec)

Bases: spack.error.SpackError

Raised when there is no way to have a concrete version for a particular spec.

exception spack.concretize.UnavailableCompilerVersionError(compiler_spec, arch=None)

Bases: spack.error.SpackError

Raised when there is no available compiler that satisfies a compiler spec.

spack.concretize.concretize_specs_together(*abstract_specs)

Given a number of specs as input, tries to concretize them together.

Parameters:*abstract_specs – abstract specs to be concretized, given either as Specs or strings
Returns:List of concretized specs
spack.concretize.disable_compiler_existence_check()
spack.concretize.enable_compiler_existence_check()
spack.concretize.find_spec(spec, condition, default=None)

Searches the dag from spec in an intelligent order and looks for a spec that matches a condition
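
A hedged example of the kind of condition this takes; the spec and the condition are illustrative:

import spack.concretize
import spack.spec

spec = spack.spec.Spec('mpileaks ^mpich')

# Look around mpich's neighborhood in the DAG for a spec that already
# names a compiler; fall back to the root if none is found.
found = spack.concretize.find_spec(
    spec['mpich'], lambda s: s.compiler is not None, default=spec)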

spack.config module

This module implements Spack’s configuration file handling.

This implements Spack’s configuration system, which handles merging multiple scopes with different levels of precedence. See the documentation on Configuration Scopes for details on how Spack’s configuration system behaves. The scopes are:

  1. default
  2. system
  3. site
  4. user

And corresponding per-platform scopes. Important functions in this module are:

  • get_config()
  • update_config()

get_config reads in YAML data for a particular scope and returns it. Callers can then modify the data and write it back with update_config.

When read in, Spack validates configurations with jsonschemas. The schemas are in submodules of spack.schema.

exception spack.config.ConfigError(message, long_message=None)

Bases: spack.error.SpackError

Superclass for all Spack config related errors.

exception spack.config.ConfigFileError(message, long_message=None)

Bases: spack.config.ConfigError

Issue reading or accessing a configuration file.

exception spack.config.ConfigFormatError(validation_error, data, filename=None, line=None)

Bases: spack.config.ConfigError

Raised when a configuration format does not match its schema.

class spack.config.ConfigScope(name, path)

Bases: object

This class represents a configuration scope.

A scope is one directory containing named configuration files. Each file is a config “section” (e.g., mirrors, compilers, etc).

clear()

Empty cached config information.

get_section(section)
get_section_filename(section)
write_section(section)
exception spack.config.ConfigSectionError(message, long_message=None)

Bases: spack.config.ConfigError

Error for referring to a bad config section name in a configuration.

class spack.config.Configuration(*scopes)

Bases: object

A full Spack configuration, from a hierarchy of config files.

This class makes it easy to add a new scope on top of an existing one.

clear_caches()

Clears the caches for configuration files.

This will cause files to be re-read upon the next request.

file_scopes

List of writable scopes with an associated file.

get(path, default=None, scope=None)

Get a config section or a single value from one.

Accepts a path syntax that allows us to grab nested config map entries. Getting the ‘config’ section would look like:

spack.config.get('config')

and the dirty section in the config scope would be:

spack.config.get('config:dirty')

We use : as the separator, like YAML objects.

get_config(section, scope=None)

Get configuration settings for a section.

If scope is None or not provided, return the merged contents of all of Spack’s configuration scopes. If scope is provided, return only the configuration as specified in that scope.

This strips off the top-level name from the YAML section. That is, for a YAML config file that looks like this:

config:
  install_tree: $spack/opt/spack
  module_roots:
    lmod:   $spack/share/spack/lmod

get_config('config') will return:

{ 'install_tree': '$spack/opt/spack',
  'module_roots': {
      'lmod': '$spack/share/spack/lmod'
  }
}
get_config_filename(scope, section)

For some scope and section, get the name of the configuration file.

highest_precedence_non_platform_scope()

Non-internal non-platform scope with highest precedence

Platform-specific scopes are of the form scope/platform

highest_precedence_scope()

Non-internal scope with highest precedence.

matching_scopes(reg_expr)

List of all scopes whose names match the provided regular expression.

For example, matching_scopes(r’^command’) will return all scopes whose names begin with command.

pop_scope()

Remove the highest precedence scope and return it.

print_section(section, blame=False)

Print a configuration to stdout.

push_scope(scope)

Add a higher precedence scope to the Configuration.

remove_scope(scope_name)
set(path, value, scope=None)

Convenience function for setting single values in config files.

Accepts the path syntax described in get().

update_config(section, update_data, scope=None)

Update the configuration file for a particular scope.

Overwrites contents of a section in a scope with update_data, then writes out the config file.

update_data should have the top-level section name stripped off (it will be re-added). Data itself can be a list, dict, or any other yaml-ish structure.

class spack.config.ImmutableConfigScope(name, path)

Bases: spack.config.ConfigScope

A configuration scope that cannot be written to.

This is used for ConfigScopes passed on the command line.

write_section(section)
class spack.config.InternalConfigScope(name, data=None)

Bases: spack.config.ConfigScope

An internal configuration scope that is not persisted to a file.

This is for spack internal use so that command-line options and config file settings are accessed the same way, and Spack can easily override settings from files.

get_section(section)

Just reads from an internal dictionary.

get_section_filename(section)
write_section(section)

This only validates, as the data is already in memory.

class spack.config.SingleFileScope(name, path, schema, yaml_path=None)

Bases: spack.config.ConfigScope

This class represents a configuration scope in a single YAML file.

get_section(section)
get_section_filename(section)
write_section(section)
spack.config.command_line_scopes = []

Configuration scopes added on the command line, set by spack.main.main().

spack.config.config = <spack.config.Configuration object>

This is the singleton configuration instance for Spack.

spack.config.config_defaults = {'config': {'build_jobs': 2, 'build_stage': '$tempdir/spack-stage', 'checksum': True, 'connect_timeout': 10, 'debug': False, 'dirty': False, 'verify_ssl': True}}

Hard-coded default values for some key configuration options. This ensures that Spack will still work even if config.yaml in the defaults scope is removed.

spack.config.configuration_paths = (('defaults', '/home/docs/checkouts/readthedocs.org/user_builds/spack/checkouts/v0.15.3/lib/spack/docs/_spack_root/etc/spack/defaults'), ('system', '/etc/spack'), ('site', '/home/docs/checkouts/readthedocs.org/user_builds/spack/checkouts/v0.15.3/lib/spack/docs/_spack_root/etc/spack'), ('user', '/home/docs/.spack'))

Builtin paths to configuration files in Spack

spack.config.default_list_scope()

Return the config scope that is listed by default.

Commands that list configuration list all scopes (merged) by default.

spack.config.default_modify_scope(section='config')

Return the config scope that commands should modify by default.

Commands that modify configuration by default modify the highest priority scope.

Parameters:section (str) – Section for which to get the default scope. If this is not ‘compilers’, a general (non-platform) scope is used.
spack.config.first_existing(dictionary, keys)

Get the value of the first key in keys that is in the dictionary.

spack.config.get(path, default=None, scope=None)

Module-level wrapper for Configuration.get().

spack.config.get_valid_type(path)

Returns an instance of a type that will pass validation for path.

The instance is created by calling the constructor with no arguments. If multiple types will satisfy validation for data at the configuration path given, the priority order is list, dict, str, bool, int, float.

spack.config.merge_yaml(dest, source)

Merges source into dest; entries in source take precedence over dest.

This routine may modify dest and should be assigned to dest, in case dest was None to begin with, e.g.:

dest = merge_yaml(dest, source)

Config file authors can optionally end any attribute in a dict with :: instead of :, and the key will override that of the parent instead of merging.
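
A small sketch of the precedence rule using plain Python dictionaries; the values are illustrative:

import spack.config

dest = {'config': {'build_jobs': 4, 'debug': False}}
source = {'config': {'build_jobs': 16}}

dest = spack.config.merge_yaml(dest, source)
# Expected result: {'config': {'build_jobs': 16, 'debug': False}}
# Entries from source win; everything else in dest is preserved.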

spack.config.override(path_or_scope, value=None)

Simple way to override config settings within a context.

Parameters:
  • path_or_scope (ConfigScope or str) – scope or single option to override
  • value (object, optional) – value for the single option

Temporarily push a scope on the current configuration, then remove it after the context completes. If a single option is provided, create an internal config scope for it and push/pop that scope.
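
A hedged usage sketch: it assumes override can be used as a context manager, as the description above implies, and 'config:dirty' is just an example option path:

import spack.config

with spack.config.override('config:dirty', True):
    # Inside the context the temporary scope takes precedence.
    assert spack.config.get('config:dirty') is True
# On exit the temporary scope is popped and the previous value is restored.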

spack.config.overrides_base_name = 'overrides-'

Base name for the (internal) overrides scope.

spack.config.process_config_path(path)
spack.config.read_config_file(filename, schema=None)

Read a YAML configuration file.

User can provide a schema for validation. If no schema is provided, we will infer the schema from the top-level key.

spack.config.scopes()

Convenience function to get list of configuration scopes.

spack.config.scopes_metavar = '{defaults,system,site,user}[/PLATFORM]'

Metavar to use for commands that accept scopes; this is shorter and more readable than listing all choices.

spack.config.section_schemas = {'compilers': {...}, 'config': {...}, 'mirrors': {...}, 'modules': {...}, 'packages': {...}, ...}

Dict of all section schemas: it maps each configuration section name to the jsonschema used to validate that section (see the spack.schema submodules).
'array'}}, 'type': 'object'}, 'target': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'variants': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}, 'version': {'default': [], 'items': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}, 'type': 'array'}}, 'type': 'object'}}, 'type': 'object'}}, 'title': 'Spack package configuration file schema', 'type': 'object'}, 'repos': {'$schema': 'http://json-schema.org/schema#', 'additionalProperties': False, 'properties': {'repos': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}}, 'title': 'Spack repository configuration file schema', 'type': 'object'}, 'upstreams': {'$schema': 'http://json-schema.org/schema#', 'additionalProperties': False, 'properties': {'upstreams': {'default': {}, 'patternProperties': {'\\w[\\w-]*': {'additionalProperties': False, 'default': {}, 'properties': {'install_tree': {'type': 'string'}, 'modules': {'properties': {'lmod': {'type': 'string'}, 'tcl': {'type': 'string'}}, 'type': 'object'}}, 'type': 'object'}}, 'type': 'object'}}, 'title': 'Spack core configuration file schema', 'type': 'object'}}

Dict from section names -> schema for that section

spack.config.set(path, value, scope=None)

Convenience function for setting single values in config files.

Accepts the path syntax described in get().

spack.config.validate(data, schema, filename=None)

Validate data read in from a Spack YAML file.

Parameters:
  • data (dict or list) – data read from a Spack YAML file
  • schema (dict or list) – jsonschema to validate data

This leverages the line information (start_mark, end_mark) stored on Spack YAML structures.
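A minimal sketch of set() together with its counterpart get(), assuming Spack's modules are importable (for example from spack python); the configuration path and scope below are illustrative:

import spack.config

# Write a single value using the 'section:key' path syntax.
spack.config.set('config:build_jobs', 4, scope='user')

# Read it back with the same path syntax.
jobs = spack.config.get('config:build_jobs')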

spack.database module

Spack’s installation tracking database.

The database serves two purposes:

  1. It implements a cache on top of a potentially very large Spack directory hierarchy, speeding up many operations that would otherwise require filesystem access.
  2. It will allow us to track external installations as well as lost packages and their dependencies.

Prior to the implementation of this store, a directory layout served as the authoritative database of packages in Spack. This module provides a cache and a sanity checking mechanism for what is in the filesystem.

exception spack.database.CorruptDatabaseError(message, long_message=None)

Bases: spack.error.SpackError

Raised when errors are found while reading the database.

class spack.database.Database(root, db_dir=None, upstream_dbs=None, is_upstream=False, enable_transaction_locking=True, record_fields=['spec', 'ref_count', 'path', 'installed', 'explicit', 'installation_time', 'deprecated_for'])

Bases: object

Per-process lock objects for each install prefix.

activated_extensions_for(spec_like, *args, **kwargs)
add(spec_like, *args, **kwargs)
clear_all_failures()

Force remove install failure tracking files.

clear_failure(spec, force=False)

Remove any persistent and cached failure tracking for the spec.

see mark_failed().

Parameters:
  • spec (Spec) – the spec whose failure indicators are being removed
  • force (bool) – True if the failure information should be cleared when a prefix failure lock exists for the file or False if the failure should not be cleared (e.g., it may be associated with a concurrent build)
db_for_spec_hash(hash_key)
deprecate(spec_like, *args, **kwargs)
deprecator(spec)

Return the spec that the given spec is deprecated for, or None

get_by_hash(dag_hash, default=None, installed=<built-in function any>)

Look up a spec by DAG hash, or by a DAG hash prefix.

Parameters:
  • dag_hash (str) – hash (or hash prefix) to look up
  • default (object, optional) – default value to return if dag_hash is not in the DB (default: None)
  • installed (bool or any, or InstallStatus or iterable of InstallStatus, optional) – if True, includes only installed specs in the search; if False, only missing specs; and if any, all specs in the database. If an InstallStatus or iterable of InstallStatus, returns specs whose install status (installed, deprecated, or missing) matches (one of) the InstallStatus. (default: any)

installed defaults to any so that we can refer to any known hash. Note that query() and query_one() differ in that they only return installed specs by default.

Returns:a list of specs matching the hash or hash prefix
Return type:(list)
get_by_hash_local(*args, **kwargs)

Look up a spec in this DB by DAG hash, or by a DAG hash prefix.

Parameters:
  • dag_hash (str) – hash (or hash prefix) to look up
  • default (object, optional) – default value to return if dag_hash is not in the DB (default: None)
  • installed (bool or any, or InstallStatus or iterable of InstallStatus, optional) – if True, includes only installed specs in the search; if False, only missing specs; and if any, all specs in the database. If an InstallStatus or iterable of InstallStatus, returns specs whose install status (installed, deprecated, or missing) matches (one of) the InstallStatus. (default: any)

installed defaults to any so that we can refer to any known hash. Note that query() and query_one() differ in that they only return installed specs by default.

Returns:a list of specs matching the hash or hash prefix
Return type:(list)
get_record(spec_like, *args, **kwargs)
installed_extensions_for(spec_like, *args, **kwargs)
installed_relatives(spec_like, *args, **kwargs)
mark_failed(spec)

Mark a spec as failing to install.

Prefix failure marking takes the form of a byte range lock on the nth byte of a file for coordinating between concurrent parallel build processes and a persistent file, named with the full hash and containing the spec, in a subdirectory of the database to enable persistence across overlapping but separate related build processes.

The failure lock file, spack.store.db.prefix_failures, lives alongside the install DB. n is the sys.maxsize-bit prefix of the associated DAG hash to make the likelihood of collision very low with no cleanup required.

missing(spec)
prefix_failed(spec)

Return True if the prefix (installation) is marked as failed.

prefix_failure_locked(spec)

Return True if a process has a failure lock on the spec.

prefix_failure_marked(spec)

Determine if the spec has a persistent failure marking.

prefix_lock(spec, timeout=None)

Get a lock on a particular spec’s installation directory.

NOTE: The installation directory does not need to exist.

Prefix lock is a byte range lock on the nth byte of a file.

The lock file is spack.store.db.prefix_lock – the DB tells us what to call it and it lives alongside the install DB.

n is the sys.maxsize-bit prefix of the DAG hash. This makes the likelihood of collision very low AND it gives us readers-writer lock semantics with just a single lockfile, so no cleanup is required.

prefix_read_lock(spec)
prefix_write_lock(spec)
query(*args, **kwargs)

Query the Spack database including all upstream databases.

Parameters:
  • query_spec – queries iterate through specs in the database and return those that satisfy the supplied query_spec. If query_spec is any, this will match all specs in the database. If it is a spec, we’ll evaluate spec.satisfies(query_spec)
  • known (bool or any, optional) – Specs that are “known” are those for which Spack can locate a package.py file – i.e., Spack “knows” how to install them. Specs that are unknown may represent packages that existed in a previous version of Spack, but have since either changed their name or been removed
  • installed (bool or any, or InstallStatus or iterable of InstallStatus, optional) – if True, includes only installed specs in the search; if False, only missing specs; and if any, all specs in the database. If an InstallStatus or iterable of InstallStatus, returns specs whose install status (installed, deprecated, or missing) matches (one of) the InstallStatus. (default: True)
  • explicit (bool or any, optional) – A spec that was installed following a specific user request is marked as explicit. If instead it was pulled-in as a dependency of a user requested spec it’s considered implicit.
  • start_date (datetime, optional) – filters the query discarding specs that have been installed before start_date.
  • end_date (datetime, optional) – filters the query discarding specs that have been installed after end_date.
  • hashes (container) – list or set of hashes that we can use to restrict the search
Returns:

list of specs that match the query

query_by_spec_hash(hash_key, data=None)
query_local(*args, **kwargs)

Query only the local Spack database.

Parameters:
  • query_spec – queries iterate through specs in the database and return those that satisfy the supplied query_spec. If query_spec is any, this will match all specs in the database. If it is a spec, we’ll evaluate spec.satisfies(query_spec)
  • known (bool or any, optional) – Specs that are “known” are those for which Spack can locate a package.py file – i.e., Spack “knows” how to install them. Specs that are unknown may represent packages that existed in a previous version of Spack, but have since either changed their name or been removed
  • installed (bool or any, or InstallStatus or iterable of InstallStatus, optional) – if True, includes only installed specs in the search; if False, only missing specs; and if any, all specs in the database. If an InstallStatus or iterable of InstallStatus, returns specs whose install status (installed, deprecated, or missing) matches (one of) the InstallStatus. (default: True)
  • explicit (bool or any, optional) – A spec that was installed following a specific user request is marked as explicit. If instead it was pulled-in as a dependency of a user requested spec it’s considered implicit.
  • start_date (datetime, optional) – filters the query discarding specs that have been installed before start_date.
  • end_date (datetime, optional) – filters the query discarding specs that have been installed after end_date.
  • hashes (container) – list or set of hashes that we can use to restrict the search
Returns:

list of specs that match the query

query_one(query_spec, known=<built-in function any>, installed=True)

Query for exactly one spec that matches the query spec.

Raises an assertion error if more than one spec matches the query. Returns None if no installed package matches.
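The query methods above are typically reached through the shared store database. A minimal sketch, assuming Spack's modules are importable and the store database is available as spack.store.db (the accessor used throughout Spack internals); the hash prefix is a hypothetical value:

import spack.store

db = spack.store.db

# All installed specs that satisfy a query spec (string or Spec).
zlibs = db.query('zlib')

# Exactly one expected match; returns None if nothing is installed.
zlib = db.query_one('zlib')

# Look up specs by a DAG hash or hash prefix (hypothetical prefix shown).
matches = db.get_by_hash('abcdef', default=[])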

read_transaction()

Get a read lock context manager for use in a with block.

reindex(directory_layout)

Build database index from scratch based on a directory layout.

Locks the DB if it isn’t locked already.

remove(spec_like, *args, **kwargs)
specs_deprecated_by(spec)

Return all specs deprecated in favor of the given spec

unused_specs

Return all the specs that are currently installed but not needed at runtime to satisfy user’s requests.

Specs in the return list are those which are not either:
  1. Installed on an explicit user request
  2. Installed as a “run” or “link” dependency (even transitive) of a spec at point 1.
write_transaction()

Get a write lock context manager for use in a with block.
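For instance, a read transaction can group several queries under a single lock so they see a consistent view of the database (a sketch under the same spack.store.db assumption as above):

import spack.store

db = spack.store.db
with db.read_transaction():            # one read lock for both queries
    installed = db.query()
    missing = db.query(installed=False)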

class spack.database.ForbiddenLock

Bases: object

exception spack.database.ForbiddenLockError(message, long_message=None)

Bases: spack.error.SpackError

Raised when an upstream DB attempts to acquire a lock

class spack.database.InstallRecord(spec, path, installed, ref_count=0, explicit=False, installation_time=None, deprecated_for=None)

Bases: object

A record represents one installation in the DB.

The record keeps track of the spec for the installation, its install path, AND whether or not it is installed. We need the installed flag in case a user either:

  1. blew away a directory, or
  2. used spack uninstall -f to get rid of it

If, in either case, the package was removed but others still depend on it, we still need to track its spec, so we don’t actually remove from the database until a spec has no installed dependents left.

Parameters:
  • spec (Spec) – spec tracked by the install record
  • path (str) – path where the spec has been installed
  • installed (bool) – whether or not the spec is currently installed
  • ref_count (int) – number of specs that depend on this one
  • explicit (bool, optional) – whether or not this spec was explicitly installed, or pulled-in as a dependency of something else
  • installation_time (time, optional) – time of the installation
classmethod from_dict(spec, dictionary)
install_type_matches(installed)
to_dict(include_fields=['spec', 'ref_count', 'path', 'installed', 'explicit', 'installation_time', 'deprecated_for'])
class spack.database.InstallStatus

Bases: str

class spack.database.InstallStatuses

Bases: object

DEPRECATED = 'deprecated'
INSTALLED = 'installed'
MISSING = 'missing'
classmethod canonicalize(query_arg)
exception spack.database.InvalidDatabaseVersionError(expected, found)

Bases: spack.error.SpackError

exception spack.database.MissingDependenciesError(message, long_message=None)

Bases: spack.error.SpackError

Raised when DB cannot find records for dependencies

exception spack.database.NonConcreteSpecAddError(message, long_message=None)

Bases: spack.error.SpackError

Raised when attempting to add a non-concrete spec to the DB.

exception spack.database.UpstreamDatabaseLockingError(message, long_message=None)

Bases: spack.error.SpackError

Raised when an operation would need to lock an upstream database

spack.database.nullcontext(*args, **kwargs)

spack.dependency module

Data structures that represent Spack’s dependency relationships.

class spack.dependency.Dependency(pkg, spec, type=('build', 'link'))

Bases: object

Class representing metadata for a dependency on a package.

This class differs from spack.spec.DependencySpec because it represents metadata at the Package level. spack.spec.DependencySpec is a descriptor for an actual package configuration, while Dependency is a descriptor for a package’s dependency requirements.

A dependency is a requirement for a configuration of another package that satisfies a particular spec. The dependency can have types, which determine how that package configuration is required, e.g. whether it is required for building the package, whether it needs to be linked to, or whether it is needed at runtime so that Spack can call commands from it.

A package can also depend on another package with patches. This is for cases where the maintainers of one package also maintain special patches for their dependencies. If one package depends on another with patches, a special version of that dependency with patches applied will be built for use by the dependent package. The patches are included in the new version’s spec hash to differentiate it from unpatched versions of the same package, so that unpatched versions of the dependency package can coexist with the patched version.

merge(other)

Merge constraints, deptypes, and patches of other into self.

name

Get the name of the dependency package.

spack.dependency.all_deptypes = ('build', 'link', 'run', 'test')

The types of dependency relationships that Spack understands.

spack.dependency.canonical_deptype(deptype)

Convert deptype to a canonical sorted tuple, or raise ValueError.

Parameters:deptype (str or list or tuple) – string representing dependency type, or a list/tuple of such strings. Can also be the builtin function all or the string ‘all’, which result in a tuple of all dependency types known to Spack.
spack.dependency.default_deptype = ('build', 'link')

Default dependency type if none is specified
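A short sketch of how these helpers behave, based on the descriptions above (expected results are shown as comments):

import spack.dependency as dep

dep.default_deptype                       # ('build', 'link')
dep.canonical_deptype('run')              # ('run',)
dep.canonical_deptype(['link', 'build'])  # sorted tuple: ('build', 'link')
dep.canonical_deptype('all')              # every deptype Spack knows about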

spack.directives module

This package contains directives that can be used within a package.

Directives are functions that can be called inside a package definition to modify the package, for example:

class OpenMpi(Package):
    depends_on("hwloc")
    provides("mpi")
    ...

provides and depends_on are spack directives.

The available directives are:

  • conflicts
  • depends_on
  • extends
  • patch
  • provides
  • resource
  • variant
  • version
spack.directives.version(ver, checksum=None, **kwargs)

Adds a version and, if appropriate, metadata for fetching its code.

The version directives are aggregated into a versions dictionary attribute with Version keys and metadata values, where the metadata is stored as a dictionary of kwargs.

The dict of arguments is turned into a valid fetch strategy for code packages later. See spack.fetch_strategy.for_package_version().
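For example, a package definition might add versions like this (the package name, URL, and checksums are illustrative placeholders):

from spack import *

class Libfoo(Package):  # hypothetical package
    """Illustrative use of the version directive."""

    homepage = 'https://example.com/libfoo'
    url = 'https://example.com/libfoo-1.0.tar.gz'

    # Each call adds an entry to the versions dictionary; the keyword
    # arguments become the fetch metadata described above.
    version('1.1', sha256='0' * 64)  # placeholder checksum
    version('1.0', sha256='0' * 64)  # placeholder checksum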

spack.directives.conflicts(conflict_spec, when=None, msg=None)

Allows a package to define a conflict.

Currently, a “conflict” is a concretized configuration that is known to be non-valid. For example, a package that is known not to be buildable with intel compilers can declare:

conflicts('%intel')

To express the same constraint only when the ‘foo’ variant is activated:

conflicts('%intel', when='+foo')
Parameters:
  • conflict_spec (Spec) – constraint defining the known conflict
  • when (Spec) – optional constraint that triggers the conflict
  • msg (str) – optional user defined message
spack.directives.depends_on(spec, when=None, type=('build', 'link'), patches=None)

Creates a dict of deps with specs defining when they apply.

Parameters:
  • spec (Spec or str) – the package and constraints depended on
  • when (Spec or str) – when the dependent satisfies this, it has the dependency represented by spec
  • type (str or tuple of str) – str or tuple of legal Spack deptypes
  • patches (obj or list) – single result of patch() directive, a str to be passed to patch, or a list of these

This directive is to be used inside a Package definition to declare that the package requires other packages to be built first. @see The section “Dependency specs” in the Spack Packaging Guide.
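A few illustrative calls, as they would appear in a package class body (specs, deptypes, and conditions chosen for illustration):

depends_on('mpi')                        # default type: ('build', 'link')
depends_on('cmake@3.12:', type='build')  # build-only dependency
depends_on('python@3.6:', type=('build', 'run'), when='+python')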

spack.directives.extends(spec, **kwargs)

Same as depends_on, but allows symlinking into dependency’s prefix tree.

This is for Python and other language modules where the module needs to be installed into the prefix of the Python installation. Spack handles this by installing modules into their own prefix, but allowing ONE module version to be symlinked into a parent Python install at a time, using spack activate.

keyword arguments can be passed to extends() so that extension packages can pass parameters to the extendee’s extension mechanism.

spack.directives.provides(*specs, **kwargs)

Allows packages to provide a virtual dependency. If a package provides ‘mpi’, other packages can declare that they depend on “mpi”, and spack can use the providing package to satisfy the dependency.

spack.directives.patch(url_or_filename, level=1, when=None, working_dir='.', **kwargs)

Packages can declare patches to apply to source. You can optionally provide a when spec to indicate that a particular patch should only be applied when the package’s spec meets certain conditions (e.g. a particular version).

Parameters:
  • url_or_filename (str) – url or relative filename of the patch
  • level (int) – patch level (as in the patch shell command)
  • when (Spec) – optional anonymous spec that specifies when to apply the patch
  • working_dir (str) – dir to change to before applying
Keyword Arguments:
 
  • sha256 (str) – sha256 sum of the patch, used to verify the patch (only required for URL patches)
  • archive_sha256 (str) – sha256 sum of the archive, if the patch is compressed (only required for compressed URL patches)
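Illustrative uses inside a package class body (the file name, URL, and checksum are hypothetical):

# A patch file shipped alongside the package, applied only to old versions.
patch('fix-install-paths.patch', when='@:1.4')

# A URL patch must carry a sha256 checksum (placeholder value shown).
patch('https://example.com/fixes/gcc10-support.patch',
      sha256='0' * 64, when='%gcc@10:')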
spack.directives.variant(name, default=None, description='', values=None, multi=None, validator=None)

Define a variant for the package. Packager can specify a default value as well as a text description.

Parameters:
  • name (str) – name of the variant
  • default (str or bool) – default value for the variant, if not specified otherwise the default will be False for a boolean variant and ‘nothing’ for a multi-valued variant
  • description (str) – description of the purpose of the variant
  • values (tuple or callable) – either a tuple of strings containing the allowed values, or a callable accepting one value and returning True if it is valid
  • multi (bool) – if False only one value per spec is allowed for this variant
  • validator (callable) – optional group validator to enforce additional logic. It receives the package name, the variant name and a tuple of values and should raise an instance of SpackError if the group doesn’t meet the additional constraints
Raises:

DirectiveError – if arguments passed to the directive are invalid
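Typical calls in a package class body (names and values are illustrative):

variant('shared', default=True, description='Build shared libraries')
variant('build_type', default='Release',
        values=('Debug', 'Release'), multi=False,
        description='CMake build type')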

spack.directives.resource(**kwargs)

Define an external resource to be fetched and staged when building the package. Based on the keywords present in the dictionary, the appropriate FetchStrategy will be used for the resource. Resources are fetched and staged in their own folder inside the Spack stage area, and then moved into the stage area of the package that needs them.

List of recognized keywords:

  • ‘when’ : (optional) represents the condition upon which the resource is needed
  • ‘destination’ : (optional) path where to move the resource. This path must be relative to the main package stage area.
  • ‘placement’ : (optional) gives the possibility to fine tune how the resource is moved into the main package stage area.

spack.directory_layout module

class spack.directory_layout.DirectoryLayout(root)

Bases: object

A directory layout is used to associate unique paths with specs. Different installations are going to want different layouts for their install, and they can use this to customize the nesting structure of spack installs.

all_specs()

To be implemented by subclasses to traverse all specs for which there is a directory within the root.

check_installed(spec)

Checks whether a spec is installed.

Return the spec’s prefix, if it is installed, None otherwise.

Raise an exception if the install is inconsistent or corrupt.

create_install_directory(spec)

Creates the installation directory for a spec.

hidden_file_paths

Return a list of hidden files used by the directory layout.

Paths are relative to the root of an install directory.

If the directory layout uses no hidden files to maintain state, this should return an empty container, e.g. [] or ().

path_for_spec(spec)

Return absolute path from the root to a directory for the spec.

relative_path_for_spec(spec)

Implemented by subclasses to return a relative path from the install root to a unique location for the provided spec.

remove_install_directory(spec, deprecated=False)

Removes a prefix and any empty parent directories from the root. Raises RemoveFailedError if something goes wrong.

exception spack.directory_layout.DirectoryLayoutError(message, long_msg=None)

Bases: spack.error.SpackError

Superclass for directory layout errors.

exception spack.directory_layout.ExtensionAlreadyInstalledError(spec, ext_spec)

Bases: spack.directory_layout.DirectoryLayoutError

Raised when an extension is added to a package that already has it.

exception spack.directory_layout.ExtensionConflictError(spec, ext_spec, conflict)

Bases: spack.directory_layout.DirectoryLayoutError

Raised when an extension being added conflicts with an extension already activated in the package.

class spack.directory_layout.ExtensionsLayout(view, **kwargs)

Bases: object

A directory layout is used to associate unique paths with specs for package extensions. Keeps track of which extensions are activated for what package. Depending on the use case, this can mean globally activated extensions directly in the installation folder - or extensions activated in filesystem views.

add_extension(spec, ext_spec)

Add to the list of currently installed extensions.

check_activated(spec, ext_spec)

Ensure that ext_spec can be removed from spec.

If not, raise NoSuchExtensionError.

check_extension_conflict(spec, ext_spec)

Ensure that ext_spec can be activated in spec.

If not, raise ExtensionAlreadyInstalledError or ExtensionConflictError.

extendee_target_directory(extendee)

Specify to which full path extendee should link all files from extensions.

extension_map(spec)

Get a dict of currently installed extension packages for a spec.

Dict maps { name : extension_spec } Modifying dict does not affect internals of this layout.

remove_extension(spec, ext_spec)

Remove from the list of currently installed extensions.

exception spack.directory_layout.InconsistentInstallDirectoryError(message, long_msg=None)

Bases: spack.directory_layout.DirectoryLayoutError

Raised when a package seems to be installed to the wrong place.

exception spack.directory_layout.InstallDirectoryAlreadyExistsError(path)

Bases: spack.directory_layout.DirectoryLayoutError

Raised when create_install_directory is called unnecessarily.

exception spack.directory_layout.InvalidDirectoryLayoutParametersError(message, long_msg=None)

Bases: spack.directory_layout.DirectoryLayoutError

Raised when invalid directory layout parameters are supplied.

exception spack.directory_layout.InvalidExtensionSpecError(message, long_msg=None)

Bases: spack.directory_layout.DirectoryLayoutError

Raised when an extension file has a bad spec in it.

exception spack.directory_layout.NoSuchExtensionError(spec, ext_spec)

Bases: spack.directory_layout.DirectoryLayoutError

Raised when an extension isn’t there on deactivate.

exception spack.directory_layout.RemoveFailedError(installed_spec, prefix, error)

Bases: spack.directory_layout.DirectoryLayoutError

Raised when a DirectoryLayout cannot remove an install prefix.

exception spack.directory_layout.SpecHashCollisionError(installed_spec, new_spec)

Bases: spack.directory_layout.DirectoryLayoutError

Raised when there is a hash collision in an install layout.

exception spack.directory_layout.SpecReadError(message, long_msg=None)

Bases: spack.directory_layout.DirectoryLayoutError

Raised when directory layout can’t read a spec.

class spack.directory_layout.YamlDirectoryLayout(root, **kwargs)

Bases: spack.directory_layout.DirectoryLayout

By default lays out installation directories like this:

<install root>/
    <platform-os-target>/
        <compiler>-<compiler version>/
            <name>-<version>-<hash>

The hash here is a SHA-1 hash for the full DAG plus the build spec. TODO: implement the build spec.

The installation directory scheme can be modified with the arguments hash_len and path_scheme.
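A small sketch of resolving an install prefix through the layout, assuming the default store layout is reachable as spack.store.layout and that Spec.concretized() is available as in Spack's spec API:

import spack.spec
import spack.store

spec = spack.spec.Spec('zlib').concretized()       # make the spec concrete
prefix = spack.store.layout.path_for_spec(spec)    # absolute install path
rel = spack.store.layout.relative_path_for_spec(spec)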

all_deprecated_specs()
all_specs()

To be implemented by subclasses to traverse all specs for which there is a directory within the root.

build_packages_path(spec)
check_installed(spec)

Checks whether a spec is installed.

Return the spec’s prefix, if it is installed, None otherwise.

Raise an exception if the install is inconsistent or corrupt.

create_install_directory(spec)

Creates the installation directory for a spec.

deprecated_file_name(spec)

Gets name of deprecated spec file in deprecated dir

deprecated_file_path(deprecated_spec, deprecator_spec=None)

Gets full path to spec file for deprecated spec

If the deprecator_spec is provided, use that. Otherwise, assume deprecated_spec is already deprecated and its prefix links to the prefix of its deprecator.

disable_upstream_check()
hidden_file_paths

Return a list of hidden files used by the directory layout.

Paths are relative to the root of an install directory.

If the directory layout uses no hidden files to maintain state, this should return an empty container, e.g. [] or ().

metadata_path(spec)
read_spec(path)

Read the contents of a file and parse them as a spec

relative_path_for_spec(spec)

Implemented by subclasses to return a relative path from the install root to a unique location for the provided spec.

spec_file_path(spec)

Gets full path to spec file

specs_by_hash()
write_spec(spec, path)

Write a spec out to a file.

class spack.directory_layout.YamlViewExtensionsLayout(view, layout)

Bases: spack.directory_layout.ExtensionsLayout

Maintain extensions within a view.

add_extension(spec, ext_spec)

Add to the list of currently installed extensions.

check_activated(spec, ext_spec)

Ensure that ext_spec can be removed from spec.

If not, raise NoSuchExtensionError.

check_extension_conflict(spec, ext_spec)

Ensure that ext_spec can be activated in spec.

If not, raise ExtensionAlreadyInstalledError or ExtensionConflictError.

extension_file_path(spec)

Gets full path to an installed package’s extension file, which keeps track of all the extensions for that package which have been added to this view.

extension_map(spec)

Defensive copying version of _extension_map() for external API.

remove_extension(spec, ext_spec)

Remove from the list of currently installed extensions.

spack.environment module

class spack.environment.Environment(path, init_file=None, with_view=None)

Bases: object

active

True if this environment is currently active.

add(user_spec, list_name='specs')

Add a single user_spec (non-concretized) to the Environment

Returns:
True if the spec was added, False if it was already
present and did not need to be added
Return type:(bool)
add_default_view_to_shell(shell)
added_specs()

Specs that are not yet installed.

Yields the user spec for non-concretized specs, and the concrete spec for already concretized but not yet installed specs.

all_hashes()

Return hashes of all specs.

Note these hashes exclude build dependencies.

all_specs()

Return all specs, even those a user spec would shadow.

check_views()

Checks if the environment’s default view can be activated.

clear()
concretize(force=False)

Concretize user_specs in this environment.

Only concretizes specs that haven’t been concretized yet unless force is True.

This only modifies the environment in memory. write() will write out a lockfile containing concretized specs.

Parameters:force (bool) – re-concretize ALL specs, even those that were already concretized
Returns:List of specs that have been concretized. Each entry is a tuple of the user spec and the corresponding concretized spec.
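A minimal sketch of driving an environment programmatically, assuming the usual pattern of wrapping modifications in a write transaction (the environment name is illustrative):

import spack.environment as ev

env = ev.create('demo')        # named environment ('demo' is illustrative)
with env.write_transaction():
    env.add('zlib')            # queue a user spec
    env.concretize()           # concretize anything not yet concrete
    env.write()                # persist spack.yaml / spack.lock and views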
concretize_and_add(user_spec, concrete_spec=None)

Concretize and add a single spec to the environment.

Concretize the provided user_spec and add it along with the concretized result to the environment. If the given user_spec was already present in the environment, this does not add a duplicate. The concretized spec will be added unless the user_spec was already present and an associated concrete spec was already present.

Parameters:concrete_spec – if provided, then it is assumed that it is the result of concretizing the provided user_spec
concretized_specs()

Tuples of (user spec, concrete spec) for all concrete specs.

config_scopes()

A list of all configuration scopes for this environment.

default_view
destroy()

Remove this environment from Spack entirely.

env_file_config_scope()

Get the configuration scope for the environment’s manifest file.

env_file_config_scope_name()

Name of the config scope of this environment’s manifest file.

env_subdir_path

Path to directory where the env stores repos, logs, views.

included_config_scopes()

List of included configuration scopes from the environment.

Scopes are listed in the YAML file in order from highest to lowest precedence, so configuration from earlier scopes will take precedence over later ones.

This routine returns them in the order they should be pushed onto the internal scope stack (so, in reverse, from lowest to highest).

install(user_spec, concrete_spec=None, **install_args)

Install a single spec into an environment.

This will automatically concretize the single spec, but it won’t affect other as-yet unconcretized specs.

install_all(args=None)

Install all concretized specs in an environment.

Note: this does not regenerate the views for the environment; that needs to be done separately with a call to write().

internal

Whether this environment is managed by Spack.

lock_path

Path to spack.lock file in this environment.

log_path
manifest_path

Path to spack.yaml file in this environment.

name

Human-readable representation of the environment.

This is the path for directory environments, and just the name for named environments.

regenerate_views()
remove(query_spec, list_name='specs', force=False)

Remove specs from an environment that match a query_spec

removed_specs()

Tuples of (user spec, concrete spec) for all specs that will be removed on the next concretize.

repo
repos_path
rm_default_view_from_shell(shell)
roots()

Specs explicitly requested by the user in this environment.

Yields both added and installed specs that have user specs in spack.yaml.

set_config(path, value)

Set configuration for this environment

update_default_view(viewpath)
update_stale_references(from_list=None)

Iterate over spec lists updating references.

user_specs
view_path_default
write(regenerate_views=True)

Writes an in-memory environment to its location on disk.

Write out package files for each newly concretized spec. Also regenerate any views associated with the environment, if regenerate_views is True.

Parameters:regenerate_views (bool) – regenerate views as well as writing if True.
write_transaction()

Get a write lock context manager for use in a with block.

exception spack.environment.SpackEnvironmentError(message, long_message=None)

Bases: spack.error.SpackError

Superclass for all errors to do with Spack environments.

class spack.environment.ViewDescriptor(base_path, root, projections={}, select=[], exclude=[], link='all')

Bases: object

static from_dict(base_path, d)
regenerate(all_specs, roots)
to_dict()
view()
spack.environment.activate(env, use_env_repo=False, add_view=True, shell='sh', prompt=None)

Activate an environment.

To activate an environment, we add its configuration scope to the existing Spack configuration, and we set active to the current environment.

Parameters:
  • env (Environment) – the environment to activate
  • use_env_repo (bool) – use the packages exactly as they appear in the environment’s repository
  • add_view (bool) – generate commands to add view to path variables
  • shell (string) – One of sh, csh, fish.
  • prompt (string) – string to add to the users prompt, or None
Returns:

Shell commands to activate environment.

Return type:

cmds

TODO: environment to use the activated spack environment.
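For example, a command-level caller might do something like the following sketch ('demo' is an illustrative environment name):

import spack.environment as ev

env = ev.read('demo')                       # load a named environment
cmds = ev.activate(env, add_view=True, shell='sh')
print(cmds)                                 # shell code for the user to eval
# ... later ...
print(ev.deactivate(shell='sh'))            # shell code to undo the activation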

spack.environment.active(name)

True if the named environment is active.

spack.environment.all_environment_names()

List the names of environments that currently exist.

spack.environment.all_environments()

Generator for all named Environments.

spack.environment.config_dict(yaml_data)

Get the configuration scope section out of a spack.yaml

spack.environment.create(name, init_file=None, with_view=None)

Create a named environment in Spack.

spack.environment.deactivate(shell='sh')

Undo any configuration or repo settings modified by activate().

Parameters:shell (string) – One of sh, csh, fish. Shell style to use.
Returns:shell commands for shell to undo environment variables
Return type:(string)
spack.environment.deactivate_config_scope(env)

Remove any scopes from env from the global config path.

spack.environment.default_manifest_yaml = '# This is a Spack Environment file.\n#\n# It describes a set of packages to be installed, along with\n# configuration settings.\nspack:\n # add package specs to the `specs` list\n specs: []\n view: true\n'

default spack.yaml file to put in new environments

spack.environment.display_specs(concretized_specs)

Displays the list of specs returned by Environment.concretize().

Parameters:concretized_specs (list) – list of specs returned by Environment.concretize()
spack.environment.env_path = '/home/docs/checkouts/readthedocs.org/user_builds/spack/checkouts/v0.15.3/lib/spack/docs/_spack_root/var/spack/environments'

path where environments are stored in the spack tree

spack.environment.env_subdir_name = '.spack-env'

Name of the directory where environments store repos, logs, views

spack.environment.exists(name)

Whether an environment with this name exists or not.

spack.environment.find_environment(args)

Find active environment from args, spack.yaml, or environment variable.

This is called in spack.main to figure out which environment to activate.

Check for an environment in this order:
  1. via spack -e ENV or spack -D DIR (arguments)
  2. as a spack.yaml file in the current directory, or
  3. via a path in the SPACK_ENV environment variable.

If an environment is found, read it in. If not, return None.

Parameters:args (Namespace) – argparse namespace with command arguments
Returns:a found environment, or None
Return type:(Environment)
spack.environment.get_env(args, cmd_name, required=False)

Used by commands to get the active environment.

This first checks for an env argument, then looks at the active environment. We check args first because Spack’s subcommand arguments are parsed after the -e and -D arguments to spack. So there may be an env argument that is not the active environment, and we give it precedence.

This is used by a number of commands for determining whether there is an active environment.

If an environment is not found and is required, print an error message that says the calling command needs an active environment.

Parameters:
  • args (Namespace) – argparse namespace with command arguments
  • cmd_name (str) – name of calling command
  • required (bool) – if True, raise an exception when no environment is found; if False, just return None
Returns:

if there is an arg or active environment

Return type:

(Environment)

spack.environment.is_env_dir(path)

Whether a directory contains a spack environment.

spack.environment.lockfile_format_version = 2

version of the lockfile format. Must increase monotonically.

spack.environment.lockfile_name = 'spack.lock'

Name of the lock file for an environment

spack.environment.make_repo_path(root)

Make a RepoPath from the repo subdirectories in an environment.

spack.environment.manifest_name = 'spack.yaml'

Name of the input yaml file for an environment

spack.environment.prepare_config_scope(env)

Add env’s scope to the global configuration search path.

spack.environment.read(name)

Get an environment with the supplied name.

spack.environment.root(name)

Get the root directory for an environment by name.

spack.environment.spack_env_var = 'SPACK_ENV'

environment variable used to indicate the active environment

spack.environment.valid_env_name(name)
spack.environment.valid_environment_name_re = '^\\w[\\w-]*$'

regex for validating environment names

spack.environment.validate_env_name(name)
spack.environment.yaml_equivalent(first, second)

Returns whether two spack yaml items are equivalent, including overrides

spack.error module

exception spack.error.NoHeadersError(message, long_message=None)

Bases: spack.error.SpackError

Raised when package headers are requested but cannot be found

exception spack.error.NoLibrariesError(message_or_name, prefix=None)

Bases: spack.error.SpackError

Raised when package libraries are requested but cannot be found

exception spack.error.SpackError(message, long_message=None)

Bases: Exception

This is the superclass for all Spack errors. Subclasses can be found in the modules they pertain to.

die()
long_message
print_context()

Print extended debug information about this exception.

This is usually printed when the top-level Spack error handler calls die(), but it can be called separately beforehand if a lower-level error handler needs to print error context and continue without raising the exception to the top level.
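A hypothetical subclass following this convention, showing how the long message feeds the extended context:

import spack.error

class FrobnicationError(spack.error.SpackError):
    """Hypothetical error for a tool-specific failure."""

try:
    raise FrobnicationError('frobnication failed',
                            'extended details useful for debugging')
except FrobnicationError as e:
    print(str(e))         # short message
    e.print_context()     # extended debug information, as described above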

exception spack.error.SpecError(message, long_message=None)

Bases: spack.error.SpackError

Superclass for all errors that occur while constructing specs.

exception spack.error.UnsatisfiableSpecError(provided, required, constraint_type)

Bases: spack.error.SpecError

Raised when a spec conflicts with package constraints. Provide the requirement that was violated when raising.

exception spack.error.UnsupportedPlatformError(message)

Bases: spack.error.SpackError

Raised by packages when a platform is not supported

spack.error.debug = False

Whether we should write stack traces or short error messages. This is module-scoped because it needs to be set very early.

spack.extensions module

Service functions and classes to implement the hooks for Spack’s command extensions.

exception spack.extensions.CommandNotFoundError(cmd_name)

Bases: spack.error.SpackError

Exception class thrown when a requested command is not recognized as such.

exception spack.extensions.ExtensionNamingError(path)

Bases: spack.error.SpackError

Exception class thrown when a configured extension does not follow the expected naming convention.

spack.extensions.extension_name(path)

Returns the name of the extension in the path passed as argument.

Parameters:path (str) – path where the extension resides
Returns:The extension name.
Raises:ExtensionNamingError – if path does not match the expected format for a Spack command extension.
spack.extensions.get_command_paths()

Return the list of paths to search for command files.

spack.extensions.get_module(cmd_name)

Imports the extension module for a particular command name and returns it.

Parameters:cmd_name (str) – name of the command for which to get a module (contains -, not _).
spack.extensions.get_template_dirs()

Returns the list of directories to search for templates in extensions.

spack.extensions.load_command_extension(command, path)

Loads a command extension from the path passed as argument.

Parameters:
  • command (str) – name of the command (contains -, not _).
  • path (str) – base path of the command extension
Returns:

A valid module if found and loadable; None if not found. Module

loading exceptions are passed through.

spack.extensions.path_for_extension(target_name, *paths)

Return the test root dir for a given extension.

Parameters:
  • target_name (str) – name of the extension to test
  • *paths – paths where the extensions reside
Returns:

Root directory where tests should reside or None

spack.fetch_strategy module

Fetch strategies are used to download source code into a staging area in order to build it. They need to define the following methods:

  • fetch()
    This should attempt to download/check out source from somewhere.
  • check()
    Apply a checksum to the downloaded source code, e.g. for an archive. May not do anything if the fetch method was safe to begin with.
  • expand()
    Expand the downloaded file (e.g., an archive) into source, with the standard stage source path as the destination directory.
  • reset()
    Restore original state of downloaded code. Used by clean commands. This may just remove the expanded source and re-expand an archive, or it may run something like git reset --hard.
  • archive()
    Archive a source directory, e.g. for creating a mirror.
class spack.fetch_strategy.BundleFetchStrategy(**kwargs)

Bases: spack.fetch_strategy.FetchStrategy

Fetch strategy associated with bundle, or no-code, packages.

Having a basic fetch strategy is a requirement for executing post-install hooks. Consequently, this class provides the API but does little more than log messages.

TODO: Remove this class by refactoring resource handling and the link between composite stages and composite fetch strategies (see #11981).

cachable

Report False as there is no code to cache.

fetch()

Simply report success – there is no code to fetch.

mirror_id()

BundlePackages don’t have a mirror id.

source_id()

BundlePackages don’t have a source id.

url_attr = ''

There is no associated URL keyword in version() for no-code packages but this property is required for some strategy-related functions (e.g., check_pkg_attributes).

class spack.fetch_strategy.CacheURLFetchStrategy(url=None, checksum=None, **kwargs)

Bases: spack.fetch_strategy.URLFetchStrategy

The resource associated with a cache URL may be out of date.

fetch()

Fetch source code archive or repo.

Returns:True on success, False on failure.
Return type:bool
exception spack.fetch_strategy.ChecksumError(message, long_message=None)

Bases: spack.fetch_strategy.FetchError

Raised when archive fails to checksum.

exception spack.fetch_strategy.ExtrapolationError(message, long_message=None)

Bases: spack.fetch_strategy.FetchError

Raised when we can’t extrapolate a version for a package.

exception spack.fetch_strategy.FailedDownloadError(url, msg='')

Bases: spack.fetch_strategy.FetchError

Raised when a download fails.

exception spack.fetch_strategy.FetchError(message, long_message=None)

Bases: spack.error.SpackError

Superclass for fetcher errors.

class spack.fetch_strategy.FetchStrategy(**kwargs)

Bases: object

Superclass of all fetch strategies.

archive(destination)

Create an archive of the downloaded data for a mirror.

For downloaded files, this should preserve the checksum of the original file. For repositories, it should just create an expandable tarball out of the downloaded repository.

cachable

Whether fetcher is capable of caching the resource it retrieves.

This generally is determined by whether the resource is identifiably associated with a specific package version.

Returns:True if can cache, False otherwise.
Return type:bool
check()

Checksum the archive fetched by this FetchStrategy.

expand()

Expand the downloaded archive into the stage source path.

fetch()

Fetch source code archive or repo.

Returns:True on success, False on failure.
Return type:bool
classmethod matches(args)

Predicate that matches fetch strategies to arguments of the version directive.

Parameters:args – arguments of the version directive
mirror_id()

This is a unique ID for a source that is intended to help identify reuse of resources across packages.

It is unique like source-id, but it does not include the package name and is not necessarily easy for a human to create themselves.

optional_attrs = []

Optional attributes can be used to distinguish fetchers when classes have multiple url_attrs at the top-level.
reset()

Revert to freshly downloaded state.

For archive files, this may just re-expand the archive.

source_id()

A unique ID for the source.

It is intended that a human could easily generate this themselves using the information available to them in the Spack package.

The returned value is added to the content which determines the full hash for a package using str().

url_attr = None

The URL attribute must be specified either at the package class level, or as a keyword argument to version(). It is used to distinguish fetchers for different versions in the package DSL.

exception spack.fetch_strategy.FetcherConflict(message, long_message=None)

Bases: spack.fetch_strategy.FetchError

Raised for packages with invalid fetch attributes.

class spack.fetch_strategy.FsCache(root)

Bases: object

destroy()
fetcher(target_path, digest, **kwargs)
store(fetcher, relative_dest)
class spack.fetch_strategy.GitFetchStrategy(**kwargs)

Bases: spack.fetch_strategy.VCSFetchStrategy

Fetch strategy that gets source code from a git repository. Use like this in a package:

version('name', git='https://github.com/project/repo.git')

Optionally, you can provide a branch, or commit to check out, e.g.:

version('1.1', git='https://github.com/project/repo.git', tag='v1.1')

You can use these three optional attributes in addition to git:

  • branch: Particular branch to build from (default is the
    repository’s default branch)
  • tag: Particular tag to check out
  • commit: Particular commit hash in the repo

Repositories are cloned into the standard stage source path directory.
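Putting the optional attributes together, version() calls in a package might look like this (the repository URL and commit hash are illustrative placeholders):

version('develop', git='https://github.com/project/repo.git', branch='develop')
version('1.1',     git='https://github.com/project/repo.git', tag='v1.1')
version('2.0',     git='https://github.com/project/repo.git',
        commit='0123456789abcdef0123456789abcdef01234567')  # placeholder hash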

archive(destination)

Create an archive of the downloaded data for a mirror.

For downloaded files, this should preserve the checksum of the original file. For repositories, it should just create an expandable tarball out of the downloaded repository.

cachable

Whether fetcher is capable of caching the resource it retrieves.

This generally is determined by whether the resource is identifiably associated with a specific package version.

Returns:True if can cache, False otherwise.
Return type:bool
fetch()

Fetch source code archive or repo.

Returns:True on success, False on failure.
Return type:bool
git
git_version
mirror_id()

This is a unique ID for a source that is intended to help identify reuse of resources across packages.

It is unique like source-id, but it does not include the package name and is not necessarily easy for a human to create themselves.

optional_attrs = ['tag', 'branch', 'commit', 'submodules', 'get_full_repo', 'submodules_delete']
protocol_supports_shallow_clone()

Shallow clone operations (--depth #) are not supported by the basic HTTP protocol or by no-protocol file specifications. Use (e.g.) https:// or file:// instead.

reset()

Revert to freshly downloaded state.

For archive files, this may just re-expand the archive.

source_id()

A unique ID for the source.

It is intended that a human could easily generate this themselves using the information available to them in the Spack package.

The returned value is added to the content which determines the full hash for a package using str().

url_attr = 'git'
class spack.fetch_strategy.GoFetchStrategy(**kwargs)

Bases: spack.fetch_strategy.VCSFetchStrategy

Fetch strategy that employs the go get infrastructure.

Use like this in a package:

version('name',
        go='github.com/monochromegane/the_platinum_searcher/...')

Go get does not natively support versions, they can be faked with git.

The fetched source will be moved to the standard stage sourcepath directory during the expand step.

archive(destination)

Create an archive of the downloaded data for a mirror.

For downloaded files, this should preserve the checksum of the original file. For repositories, it should just create an expandable tarball out of the downloaded repository.

expand()

Expand the downloaded archive into the stage source path.

fetch()

Fetch source code archive or repo.

Returns:True on success, False on failure.
Return type:bool
go
go_version
reset()

Revert to freshly downloaded state.

For archive files, this may just re-expand the archive.

url_attr = 'go'
class spack.fetch_strategy.HgFetchStrategy(**kwargs)

Bases: spack.fetch_strategy.VCSFetchStrategy

Fetch strategy that gets source code from a Mercurial repository. Use like this in a package:

version('name', hg='https://jay.grs.rwth-aachen.de/hg/lwm2')

Optionally, you can provide a branch, or revision to check out, e.g.:

version('torus', hg='https://jay.grs.rwth-aachen.de/hg/lwm2', branch='torus')

You can use the optional ‘revision’ attribute to check out a branch, tag, or particular revision in hg. To prevent non-reproducible builds, using a moving target like a branch is discouraged.

  • revision: Particular revision, branch, or tag.

Repositories are cloned into the standard stage source path directory.

archive(destination)

Create an archive of the downloaded data for a mirror.

For downloaded files, this should preserve the checksum of the original file. For repositories, it should just create an expandable tarball out of the downloaded repository.

cachable

Whether fetcher is capable of caching the resource it retrieves.

This generally is determined by whether the resource is identifiably associated with a specific package version.

Returns:True if can cache, False otherwise.
Return type:bool
fetch()

Fetch source code archive or repo.

Returns:True on success, False on failure.
Return type:bool
hg

Returns:the hg executable
Return type:Executable
mirror_id()

This is a unique ID for a source that is intended to help identify reuse of resources across packages.

It is unique like source-id, but it does not include the package name and is not necessarily easy for a human to create themselves.

optional_attrs = ['revision']
reset()

Revert to freshly downloaded state.

For archive files, this may just re-expand the archive.

source_id()

A unique ID for the source.

It is intended that a human could easily generate this themselves using the information available to them in the Spack package.

The returned value is added to the content which determines the full hash for a package using str().

url_attr = 'hg'
exception spack.fetch_strategy.InvalidArgsError(pkg=None, version=None, **args)

Bases: spack.fetch_strategy.FetchError

Raised when a version can’t be deduced from a set of arguments.

exception spack.fetch_strategy.NoArchiveFileError(message, long_message=None)

Bases: spack.fetch_strategy.FetchError

Raised when an archive file is expected but none exists.

exception spack.fetch_strategy.NoCacheError(message, long_message=None)

Bases: spack.fetch_strategy.FetchError

Raised when there is no cached archive for a package.

exception spack.fetch_strategy.NoDigestError(message, long_message=None)

Bases: spack.fetch_strategy.FetchError

Raised after attempt to checksum when URL has no digest.

exception spack.fetch_strategy.NoStageError(method)

Bases: spack.fetch_strategy.FetchError

Raised when fetch operations are called before set_stage().

class spack.fetch_strategy.S3FetchStrategy(*args, **kwargs)

Bases: spack.fetch_strategy.URLFetchStrategy

FetchStrategy that pulls from an S3 bucket.

fetch()

Fetch source code archive or repo.

Returns:True on success, False on failure.
Return type:bool
url_attr = 's3'
class spack.fetch_strategy.SvnFetchStrategy(**kwargs)

Bases: spack.fetch_strategy.VCSFetchStrategy

Fetch strategy that gets source code from a subversion repository.

Use like this in a package:

version('name', svn='http://www.example.com/svn/trunk')

Optionally, you can provide a revision for the URL:

version('name', svn='http://www.example.com/svn/trunk', revision='1641')

Repositories are checked out into the standard stage source path directory.

archive(destination)

Create an archive of the downloaded data for a mirror.

For downloaded files, this should preserve the checksum of the original file. For repositories, it should just create an expandable tarball out of the downloaded repository.

cachable

Whether fetcher is capable of caching the resource it retrieves.

This generally is determined by whether the resource is identifiably associated with a specific package version.

Returns:True if can cache, False otherwise.
Return type:bool
fetch()

Fetch source code archive or repo.

Returns:True on success, False on failure.
Return type:bool
mirror_id()

This is a unique ID for a source that is intended to help identify reuse of resources across packages.

It is unique like source-id, but it does not include the package name and is not necessarily easy for a human to create themselves.

optional_attrs = ['revision']
reset()

Revert to freshly downloaded state.

For archive files, this may just re-expand the archive.

source_id()

A unique ID for the source.

It is intended that a human could easily generate this themselves using the information available to them in the Spack package.

The returned value is added to the content which determines the full hash for a package using str().

svn
url_attr = 'svn'
class spack.fetch_strategy.URLFetchStrategy(url=None, checksum=None, **kwargs)

Bases: spack.fetch_strategy.FetchStrategy

URLFetchStrategy pulls source code from a URL for an archive, checks the archive against a checksum, and decompresses the archive.

The destination for the resulting file(s) is the standard stage path.
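
As a hedged illustration, a package typically creates this strategy implicitly through its url attribute and a checksummed version() call; the URL and digest below are placeholders:

url = 'https://example.com/downloads/foo-1.2.3.tar.gz'
version('1.2.3', sha256='<64-character sha256 digest of the tarball>')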

archive(destination)

Just moves this archive to the destination.

archive_file

Path to the source archive within this stage directory.

cachable

Whether fetcher is capable of caching the resource it retrieves.

This generally is determined by whether the resource is identifiably associated with a specific package version.

Returns:True if can cache, False otherwise.
Return type:bool
candidate_urls
check()

Check the downloaded archive against a checksum digest. No-op if this stage checks code out of a repository.

curl
expand()

Expand the downloaded archive into the stage source path.

fetch()

Fetch source code archive or repo.

Returns:True on success, False on failure.
Return type:bool
mirror_id()

This is a unique ID for a source that is intended to help identify reuse of resources across packages.

It is unique like source-id, but it does not include the package name and is not necessarily easy for a human to create themselves.

optional_attrs = ['md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512', 'checksum']
reset()

Removes the source path if it exists, then re-expands the archive.

source_id()

A unique ID for the source.

It is intended that a human could easily generate this themselves using the information available to them in the Spack package.

The returned value is added to the content which determines the full hash for a package using str().

url_attr = 'url'
class spack.fetch_strategy.VCSFetchStrategy(**kwargs)

Bases: spack.fetch_strategy.FetchStrategy

Superclass for version control system fetch strategies.

Like all fetchers, VCS fetchers are identified by the attributes passed to the version directive. The optional_attrs for a VCS fetch strategy represent types of revisions, e.g. tags, branches, commits, etc.

The required attributes (git, svn, etc.) are used to specify the URL and to distinguish a VCS fetch strategy from a URL fetch strategy.

archive(destination, **kwargs)

Create an archive of the downloaded data for a mirror.

For downloaded files, this should preserve the checksum of the original file. For repositories, it should just create an expandable tarball out of the downloaded repository.

check()

Checksum the archive fetched by this FetchStrategy.

expand()

Expand the downloaded archive into the stage source path.

spack.fetch_strategy.all_strategies = [<class 'spack.fetch_strategy.BundleFetchStrategy'>, <class 'spack.fetch_strategy.URLFetchStrategy'>, <class 'spack.fetch_strategy.CacheURLFetchStrategy'>, <class 'spack.fetch_strategy.GoFetchStrategy'>, <class 'spack.fetch_strategy.GitFetchStrategy'>, <class 'spack.fetch_strategy.SvnFetchStrategy'>, <class 'spack.fetch_strategy.HgFetchStrategy'>, <class 'spack.fetch_strategy.S3FetchStrategy'>]

List of all fetch strategies, created by FetchStrategy metaclass.

spack.fetch_strategy.check_pkg_attributes(pkg)

Find ambiguous top-level fetch attributes in a package.

Currently this only ensures that two or more VCS fetch strategies are not specified at once.

spack.fetch_strategy.fetcher(cls)

Decorator used to register fetch strategies.

spack.fetch_strategy.for_package_version(pkg, version)

Determine a fetch strategy based on the arguments supplied to version() in the package description.

spack.fetch_strategy.from_kwargs(**kwargs)

Construct an appropriate FetchStrategy from the given keyword arguments.

Parameters:**kwargs – dictionary of keyword arguments, e.g. from a version() directive in a package.
Returns:The fetch strategy that matches the args, based on attribute names (e.g., git, hg, etc.)
Return type:fetch_strategy
Raises:FetchError – If no fetch_strategy matches the args.
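
A minimal usage sketch, assuming the keyword arguments mirror those of a version() directive (the repository URL and tag below are placeholders):

from spack.fetch_strategy import from_kwargs

# Placeholder URL and tag; raises FetchError if no strategy matches the kwargs.
fetcher = from_kwargs(git='https://github.com/project/repo.git', tag='v1.1')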
spack.fetch_strategy.from_list_url(pkg)

If a package provides a URL which lists URLs for resources by version, this can create a fetcher for a URL discovered for the specified package’s version.

spack.fetch_strategy.from_url(url)

Given a URL, find an appropriate fetch strategy for it. Currently just gives you a URLFetchStrategy that uses curl.

TODO: make this return appropriate fetch strategies for other
types of URLs.
spack.fetch_strategy.from_url_scheme(url, *args, **kwargs)

Finds a suitable FetchStrategy by matching its url_attr with the scheme in the given url.

spack.fetch_strategy.stable_target(fetcher)

Returns whether the fetcher target is expected to have a stable checksum. This is only true if the target is a preexisting archive file.

spack.fetch_strategy.warn_content_type_mismatch(subject, content_type='HTML')

spack.filesystem_view module

class spack.filesystem_view.FilesystemView(root, layout, **kwargs)

Bases: object

Governs a filesystem view that is located at a certain root directory.

Packages are linked from their install directories into a common file hierarchy.

In distributed filesystems, loading each installed package separately can lead to slow-downs due to too many directories being traversed. This can be circumvented by loading all needed modules into a common directory structure.
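
A hedged sketch of working with a view once one has been constructed; here view is assumed to be an existing YamlFilesystemView instance and spec a concrete spec:

# Assumes `view` is a YamlFilesystemView and `spec` is a concrete Spec.
view.add_specs(spec, with_dependencies=True)     # link spec and its dependencies
linked = view.get_spec(spec)                     # the spec actually linked, or None
view.remove_specs(spec, with_dependencies=False)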

add_extension(spec)

Add (link) an extension in this view. Does not add dependencies.

add_specs(*specs, **kwargs)

Add given specs to view.

The supplied specs might be standalone packages or extensions of other packages.

Should accept with_dependencies as keyword argument (default True) to indicate whether or not dependencies should be activated as well.

Should accept an exclude keyword argument containing a list of regexps that filter out matching spec names.

This method should make use of activate_{extension,standalone}.

add_standalone(spec)

Add (link) a standalone package into this view.

check_added(spec)

Check if the given concrete spec is active in this view.

get_all_specs()

Get all specs currently active in this view.

get_projection_for_spec(spec)

Get the projection in this view for a spec.

get_spec(spec)

Return the actual spec linked in this view (i.e. do not look it up in the database by name).

spec can be a name or a spec from which the name is extracted.

As there can only be a single version active for any spec the name is enough to identify the spec in the view.

If no spec is present, returns None.

print_status(*specs, **kwargs)
Print a short summary about the given specs, detailing whether...
  • ...they are active in the view.
  • ...they are active but the activated version differs.
  • ...they are not active in the view.

Takes with_dependencies keyword argument so that the status of dependencies is printed as well.

remove_extension(spec)

Remove (unlink) an extension from this view.

remove_specs(*specs, **kwargs)

Removes given specs from view.

The supplied spec might be a standalone package or an extension of another package.

Should accept with_dependencies as keyword argument (default True) to indicate whether or not dependencies should be deactivated as well.

Should accept with_dependents as keyword argument (default True) to indicate whether or not dependents on the deactivated specs should be removed as well.

Should accept an exclude keyword argument containing a list of regexps that filter out matching spec names.

This method should make use of deactivate_{extension,standalone}.

remove_standalone(spec)

Remove (unlink) a standalone package from this view.

class spack.filesystem_view.YamlFilesystemView(root, layout, **kwargs)

Bases: spack.filesystem_view.FilesystemView

Filesystem view to work with a yaml based directory layout.

add_extension(spec)

Add (link) an extension in this view. Does not add dependencies.

add_specs(*specs, **kwargs)

Add given specs to view.

The supplied specs might be standalone packages or extensions of other packages.

Should accept with_dependencies as keyword argument (default True) to indicate whether or not dependencies should be activated as well.

Should accept an exclude keyword argument containing a list of regexps that filter out matching spec names.

This method should make use of activate_{extension,standalone}.

add_standalone(spec)

Add (link) a standalone package into this view.

check_added(spec)

Check if the given concrete spec is active in this view.

clean()
get_all_specs()

Get all specs currently active in this view.

get_conflicts(*specs)

Return list of tuples (<spec>, <spec in view>) where the spec active in the view differs from the one to be activated.

get_path_meta_folder(spec)

Get path to meta folder for either spec or spec name.

get_projection_for_spec(spec)

Return the projection for a spec in this view.

Relies on the ordering of projections to avoid ambiguity.

get_spec(spec)

Return the actual spec linked in this view (i.e. do not look it up in the database by name).

spec can be a name or a spec from which the name is extracted.

As there can only be a single version active for any spec the name is enough to identify the spec in the view.

If no spec is present, returns None.

merge(spec, ignore=None)
print_conflict(spec_active, spec_specified, level='error')

Singular print function for spec conflicts.

print_status(*specs, **kwargs)
Print a short summary about the given specs, detailing whether...
  • ...they are active in the view.
  • ...they are active but the activated version differs.
  • ...they are not active in the view.

Takes with_dependencies keyword argument so that the status of dependencies is printed as well.

read_projections()
remove_extension(spec, with_dependents=True)

Remove (unlink) an extension from this view.

remove_file(src, dest)
remove_specs(*specs, **kwargs)

Removes given specs from view.

The supplied spec might be a standalone package or an extension of another package.

Should accept with_dependencies as keyword argument (default True) to indicate whether or not dependencies should be deactivated as well.

Should accept with_dependents as keyword argument (default True) to indicate whether or not dependents on the deactivated specs should be removed as well.

Should accept an exclude keyword argument containing a list of regexps that filter out matching spec names.

This method should make use of deactivate_{extension,standalone}.

remove_standalone(spec)

Remove (unlink) a standalone package from this view.

unmerge(spec, ignore=None)
write_projections()

spack.graph module

Functions for graphing DAGs of dependencies.

This file contains code for graphing DAGs of software packages (i.e. Spack specs). There are two main functions you probably care about:

graph_ascii() will output a colored graph of a spec in ascii format, kind of like the graph git shows with "git log --graph", e.g.:

o  mpileaks
|\
| |\
| o |  callpath
|/| |
| |\|
| |\ \
| | |\ \
| | | | o  adept-utils
| |_|_|/|
|/| | | |
o | | | |  mpi
 / / / /
| | o |  dyninst
| |/| |
|/|/| |
| | |/
| o |  libdwarf
|/ /
o |  libelf
 /
o  boost

graph_dot() will output a graph of a spec (or multiple specs) in dot format.

Note that graph_ascii assumes a single spec while graph_dot can take a number of specs as input.
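
A hedged sketch of calling these functions from Python; the package name is arbitrary and Spec.concretized() is assumed from Spack's spec API:

import sys

import spack.spec
from spack.graph import graph_ascii, graph_dot

spec = spack.spec.Spec('zlib').concretized()   # 'zlib' is just an example package
graph_ascii(spec, out=sys.stdout)              # handles one spec at a time
graph_dot([spec], out=sys.stdout)              # accepts multiple specs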

spack.graph.topological_sort(spec, reverse=False, deptype='all')

Topological sort for specs.

Return a list of dependency specs sorted topologically. The spec argument is not modified in the process.

spack.graph.graph_ascii(spec, node='o', out=None, debug=False, indent=0, color=None, deptype='all')
class spack.graph.AsciiGraph

Bases: object

write(spec, color=None, out=None)

Write out an ascii graph of the provided spec.

Arguments: spec – spec to graph. This only handles one spec at a time.

Optional arguments:

out – file object to write out to (default is sys.stdout)

color – whether to write in color. Default is to autodetect
based on output file.
spack.graph.graph_dot(specs, deptype='all', static=False, out=None)

Generate a graph in dot format of all provided specs.

Print out a dot formatted graph of all the dependencies between packages. Output can be passed to graphviz, e.g.:

spack graph --dot qt | dot -Tpdf > spack-graph.pdf

spack.hash_types module

Definitions that control how Spack creates Spec hashes.

class spack.hash_types.SpecHashDescriptor(deptype=('link', 'run'), package_hash=False, attr=None)

Bases: object

This class defines how hashes are generated on Spec objects.

Spec hashes in Spack are generated from a serialized (e.g., with YAML) representation of the Spec graph. The representation may only include certain dependency types, and it may optionally include a canonicalized hash of the package.py for each node in the graph.

We currently use different hashes for different use cases.

spack.hash_types.build_hash = <spack.hash_types.SpecHashDescriptor object>

Hash descriptor that includes build dependencies.

spack.hash_types.dag_hash = <spack.hash_types.SpecHashDescriptor object>

Default Hash descriptor, used by Spec.dag_hash() and stored in the DB.

spack.hash_types.full_hash = <spack.hash_types.SpecHashDescriptor object>

Full hash used in build pipelines to determine when to rebuild packages.
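
These descriptors are normally consumed through methods on Spec; a hedged sketch, assuming spec is a concrete spack.spec.Spec exposing dag_hash() and full_hash():

# Assumes `spec` is a concrete spack.spec.Spec.
dag = spec.dag_hash()    # default hash, stored in the database
full = spec.full_hash()  # also hashes package.py content; used by build pipelines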

spack.installer module

This module encapsulates package installation functionality.

The PackageInstaller coordinates concurrent builds of packages for the same Spack instance by leveraging the dependency DAG and file system locks. It also proceeds with the installation of non-dependent packages of failed dependencies in order to install as many dependencies of a package as possible.

Bottom-up traversal of the dependency DAG while prioritizing packages with no uninstalled dependencies allows multiple processes to perform concurrent builds of separate packages associated with a spec.

File system locks enable coordination such that no two processes attempt to build the same or a failed dependency package.

Failures to install dependency packages result in removal of their dependents’ build tasks from the current process. A failure file is also written (and locked) so that other processes can detect the failure and adjust their build tasks accordingly.

This module supports the coordination of local and distributed concurrent installations of packages in a Spack instance.

class spack.installer.BuildTask(pkg, compiler, start, attempts, status, installed)

Bases: object

Class for representing the build task for a package.

flag_installed(installed)

Ensure the dependency is not considered to still be uninstalled.

Parameters:installed (list of str) – the identifiers of packages that have been installed so far
key

The key is the tuple (# uninstalled dependencies, sequence).

priority

The priority is based on the remaining uninstalled dependencies.

spec

The specification associated with the package.

exception spack.installer.ExternalPackageError(message, long_msg=None)

Bases: spack.installer.InstallError

Raised by install() when a package is only for external use.

exception spack.installer.InstallError(message, long_msg=None)

Bases: spack.error.SpackError

Raised when something goes wrong during install or uninstall.

exception spack.installer.InstallLockError(message, long_msg=None)

Bases: spack.installer.InstallError

Raised during install when something goes wrong with package locking.

class spack.installer.PackageInstaller(pkg)

Bases: object

Class for managing the install process for a Spack instance based on a bottom-up DAG approach.

This installer can coordinate concurrent batch and interactive, local and distributed (on a shared file system) builds for the same Spack instance.

install(**kwargs)

Install the package and/or associated dependencies.

Parameters:
  • cache_only (bool) – Fail if binary package unavailable.
  • dirty (bool) – Don’t clean the build environment before installing.
  • explicit (bool) – True if package was explicitly installed, False if package was implicitly installed (as a dependency).
  • fail_fast (bool) – Fail if any dependency fails to install; otherwise, the default is to install as many dependencies as possible (i.e., best effort installation).
  • fake (bool) – Don’t really build; install fake stub files instead.
  • force (bool) – Install again, even if already installed.
  • install_deps (bool) – Install dependencies before installing this package
  • install_source (bool) – By default, source is not installed, but for debugging it might be useful to keep it around.
  • keep_prefix (bool) – Keep install prefix on failure. By default, destroys it.
  • keep_stage (bool) – By default, stage is destroyed only if there are no exceptions during build. Set to True to keep the stage even with exceptions.
  • restage (bool) – Force spack to restage the package source.
  • skip_patch (bool) – Skip patch stage of build if True.
  • stop_before (InstallPhase) – stop execution before this installation phase (or None)
  • stop_at (InstallPhase) – last installation phase to be executed (or None)
  • tests (bool or list or set) – False to run no tests, True to test all packages, or a list of package names to run tests for some
  • use_cache (bool) – Install from binary package, if available.
  • verbose (bool) – Display verbose build output (by default, suppresses it)
spec

The specification associated with the package.
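
A hedged sketch of driving an install; pkg is assumed to be a package instance whose spec is already concrete:

from spack.installer import PackageInstaller

# Assumes `pkg` is a package instance with a concrete spec.
installer = PackageInstaller(pkg)
installer.install(keep_stage=True, verbose=True)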

spack.installer.STATUS_ADDED = 'queued'

Build status indicating task has been added.

spack.installer.STATUS_DEQUEUED = 'dequeued'

Build status indicating the task has been popped from the queue

spack.installer.STATUS_FAILED = 'failed'

Build status indicating the spec failed to install

spack.installer.STATUS_INSTALLED = 'installed'

Build status indicating the spec was successfully installed

spack.installer.STATUS_INSTALLING = 'installing'

Build status indicating the spec is being installed (possibly by another process)

spack.installer.STATUS_REMOVED = 'removed'

Build status indicating task has been removed (to maintain priority queue invariants).

exception spack.installer.UpstreamPackageError(message, long_msg=None)

Bases: spack.installer.InstallError

Raised during install when something goes wrong with an upstream package.

spack.installer.clear_failures()

Remove all failure tracking markers for the Spack instance.

spack.installer.dump_packages(spec, path)

Dump all package information for a spec and its dependencies.

This creates a package repository within path for every namespace in the spec DAG, and fills the repos with package files and patch files for every node in the DAG.

Parameters:
  • spec (Spec) – the Spack spec whose package information is to be dumped
  • path (str) – the path to the build packages directory
spack.installer.install_msg(name, pid)

Colorize the name/id of the package being installed

Parameters:
  • name (str) – Name/id of the package being installed
  • pid (id) – id of the installer process
Returns:(str) Colorized installing message

spack.installer.log(pkg)

Copy provenance into the install directory on success

Parameters:pkg (Package) – the package that was installed and built
spack.installer.package_id(pkg)

A “unique” package identifier for installation purposes

The identifier is used to track build tasks, locks, install, and failure statuses.

Parameters:pkg (PackageBase) – the package from which the identifier is derived

spack.main module

This is the implementation of the Spack command line executable.

In a normal Spack installation, this is invoked from the bin/spack script after the system path is set up.

class spack.main.SpackArgumentParser(prog=None, usage=None, description=None, epilog=None, parents=[], formatter_class=<class 'argparse.HelpFormatter'>, prefix_chars='-', fromfile_prefix_chars=None, argument_default=None, conflict_handler='error', add_help=True, allow_abbrev=True)

Bases: argparse.ArgumentParser

add_command(cmd_name)

Add one subcommand to this parser.

add_subparsers(**kwargs)

Ensure that sensible defaults are propagated to subparsers

format_help(level='short')
format_help_sections(level)

Format help on sections for a particular verbosity level.

Parameters:level (str) – ‘short’ or ‘long’ (more commands shown for long)
class spack.main.SpackCommand(command_name)

Bases: object

Callable object that invokes a spack command (for testing).

Example usage:

install = SpackCommand('install')
install('-v', 'mpich')

Use this to invoke Spack commands directly from Python and check their output.

exception spack.main.SpackCommandError

Bases: Exception

Raised when SpackCommand execution fails.

class spack.main.SpackHelpFormatter(prog, indent_increment=2, max_help_position=24, width=None)

Bases: argparse.RawTextHelpFormatter

spack.main.add_all_commands(parser)

Add all spack subcommands to the parser.

spack.main.aliases = {'rm': 'remove'}

top-level aliases for Spack commands

spack.main.allows_unknown_args(command)

Implements really simple argument injection for unknown arguments.

Commands may add an optional argument called “unknown args” to indicate they can handle unknown args, and we’ll pass the unknown args in.

spack.main.get_version()

Get a descriptive version of this instance of Spack.

If this is a git repository, and if it is not on a release tag, return a string like:

release_version-commits_since_release-commit

If we are at a release tag, or if this is not a git repo, return the real spack release number (e.g., 0.13.3).

spack.main.index_commands()

create an index of commands by section for this help level

spack.main.intro_by_level = {'long': 'Complete list of spack commands:', 'short': 'These are common spack commands:'}

intro text for help at different levels

spack.main.levels = ['short', 'long']

help levels in order of detail (i.e., number of commands shown)

spack.main.main(argv=None)

This is the entry point for the Spack command.

Parameters:argv (list of str or None) – command line arguments, NOT including the executable name. If None, parses from sys.argv.
spack.main.make_argument_parser(**kwargs)

Create a basic argument parser without any subcommands added.

spack.main.options_by_level = {'long': 'all', 'short': ['h', 'k', 'V', 'color']}

control top-level spack options shown in basic vs. advanced help

spack.main.print_setup_info(*info)

Print basic information needed by setup-env.[c]sh.

Parameters:info (list of str) – list of things to print: comma-separated list of ‘csh’, ‘sh’, or ‘modules’

This is in main.py to make it fast; the setup scripts need to invoke spack in login scripts, and it needs to be quick.

spack.main.required_command_properties = ['level', 'section', 'description']

Properties that commands are required to set.

spack.main.section_descriptions = {'admin': 'administration', 'basic': 'query packages', 'build': 'build packages', 'config': 'configuration', 'developer': 'developer', 'environment': 'environment', 'extensions': 'extensions', 'help': 'more help', 'packaging': 'create packages', 'system': 'system'}

Longer text for each section, to show in help

spack.main.section_order = {'basic': ['list', 'info', 'find'], 'build': ['fetch', 'stage', 'patch', 'configure', 'build', 'restage', 'install', 'uninstall', 'clean'], 'packaging': ['create', 'edit']}

preferential command order for some sections (e.g., build pipeline is in execution order, not alphabetical)

spack.main.send_warning_to_tty(message, *args)

Redirects messages to tty.warn.

spack.main.set_working_dir()

Change the working directory to getcwd, or spack prefix if no cwd.

spack.main.setup_main_options(args)

Configure spack globals based on the basic options.

spack.main.spack_working_dir = None

Recorded directory where spack command was originally invoked

spack.main.stat_names = {'calls': (((1, -1),), 'call count'), 'cumtime': (((3, -1),), 'cumulative time'), 'cumulative': (((3, -1),), 'cumulative time'), 'filename': (((4, 1),), 'file name'), 'line': (((5, 1),), 'line number'), 'module': (((4, 1),), 'file name'), 'name': (((6, 1),), 'function name'), 'ncalls': (((1, -1),), 'call count'), 'nfl': (((6, 1), (4, 1), (5, 1)), 'name/file/line'), 'pcalls': (((0, -1),), 'primitive call count'), 'stdname': (((7, 1),), 'standard name'), 'time': (((2, -1),), 'internal time'), 'tottime': (((2, -1),), 'internal time')}

names of profile statistics

spack.mirror module

This file contains code for creating spack mirror directories. A mirror is an organized hierarchy containing specially named archive files. This enables spack to know where to find files in a mirror if the main server for a particular package is down. Or, if the computer where spack is run is not connected to the internet, it allows spack to download packages directly from a mirror (e.g., on an intranet).

class spack.mirror.Mirror(fetch_url, push_url=None, name=None)

Bases: object

Represents a named location for storing source tarballs and binary packages.

Mirrors have a fetch_url that indicates where and how artifacts are fetched from them, and a push_url that indicates where and how artifacts are pushed to them. These two URLs are usually the same.

display(max_len=0)
fetch_url
static from_dict(d, name=None)
static from_json(stream, name=None)
static from_yaml(stream, name=None)
name
push_url
to_dict()
to_json(stream=None)
to_yaml(stream=None)
class spack.mirror.MirrorCollection(mirrors=None, scope=None)

Bases: collections.abc.Mapping

A mapping of mirror names to mirrors.

display()
static from_dict(d)
static from_json(stream, name=None)
static from_yaml(stream, name=None)
lookup(name_or_url)

Looks up and returns a Mirror.

If this MirrorCollection contains a named Mirror under the name [name_or_url], then that mirror is returned. Otherwise, [name_or_url] is assumed to be a mirror URL, and an anonymous mirror with the given URL is returned.

to_dict(recursive=False)
to_json(stream=None)
to_yaml(stream=None)
exception spack.mirror.MirrorError(msg, long_msg=None)

Bases: spack.error.SpackError

Superclass of all mirror-creation related errors.

class spack.mirror.MirrorReference(cosmetic_path, global_path=None)

Bases: object

A MirrorReference stores the relative paths where you can store a package/resource in a mirror directory.

The appropriate storage location is given by storage_path. The cosmetic_path property provides a reference that a human could generate themselves based on reading the details of the package.

A user can iterate over a MirrorReference object to get all the possible names that might be used to refer to the resource in a mirror; this includes names generated by previous naming schemes that are no longer reported by storage_path or cosmetic_path.

storage_path
class spack.mirror.MirrorStats

Bases: object

added(resource)
already_existed(resource)
error()
next_spec(spec)
stats()
spack.mirror.create(path, specs, skip_unstable_versions=False)

Create a directory to be used as a spack mirror, and fill it with package archives.

Parameters:
  • path – Path to create a mirror directory hierarchy in.
  • specs – Any package versions matching these specs will be added to the mirror.
  • skip_unstable_versions – if true, this skips adding resources when they do not have a stable archive checksum (as determined by fetch_strategy.stable_target)
Return Value:

Returns a tuple of lists: (present, mirrored, error)

  • present: Package specs that were already present.
  • mirrored: Package specs that were successfully mirrored.
  • error: Package specs that failed to mirror due to some error.

This routine iterates through all known package versions, and it creates specs for those versions. If the version satisfies any spec in the specs list, it is downloaded and added to the mirror.
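
A hedged usage sketch; the mirror path and package name below are placeholders, and get_all_versions() (documented below) is used only to expand the spec list:

import spack.mirror
import spack.spec

# 'zlib' and the mirror path are placeholders.
specs = spack.mirror.get_all_versions([spack.spec.Spec('zlib')])
present, mirrored, error = spack.mirror.create('/path/to/mirror', specs)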

spack.mirror.get_all_versions(specs)

Given a set of initial specs, return a new set of specs that includes each version of each package in the original set.

Note that if any spec in the original set specifies properties other than version, this information will be omitted in the new set; for example, the new set of specs will not include variant settings.

spack.mirror.get_matching_versions(specs, num_versions=1)

Get a spec for EACH known version matching any spec in the list. For concrete specs, this retrieves the concrete version and, if more than one version per spec is requested, retrieves the latest versions of the package.

spack.mirror.mirror_archive_paths(fetcher, per_package_ref, spec=None)

Returns a MirrorReference object which keeps track of the relative storage path of the resource associated with the specified fetcher.

spack.mixins module

This module contains additional behavior that can be attached to any given package.

spack.mixins.filter_compiler_wrappers(*files, **kwargs)

Substitutes any path referring to a Spack compiler wrapper with the path of the underlying compiler that has been used.

If this isn’t done, the files will have CC, CXX, F77, and FC set to Spack’s generic cc, c++, f77, and f90. We want them to be bound to whatever compiler they were built with.

Parameters:
  • *files – files to be filtered relative to the search root (which is, by default, the installation prefix)
  • **kwargs

    allowed keyword arguments

    after
    specifies after which phase the files should be filtered (defaults to ‘install’)
    relative_root
path relative to prefix where to start searching for the files to be filtered. If not set, the install prefix will be used as the search root. It is highly recommended to set this, as searching from the installation prefix may affect performance severely in some cases.
    ignore_absent, backup
    these two keyword arguments, if present, will be forwarded to filter_file (see its documentation for more information on their behavior)
    recursive
    this keyword argument, if present, will be forwarded to find (see its documentation for more information on the behavior)
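
A hedged sketch of how a package might invoke this mixin; the file name and relative_root below are hypothetical:

class SomePackage(Package):
    ...
    # Hypothetical: rewrite wrapper paths recorded in an installed config script.
    filter_compiler_wrappers('foo-config', relative_root='bin')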

spack.multimethod module

This module contains utilities for using multi-methods in spack. You can think of multi-methods like overloaded methods – they’re methods with the same name, and we need to select a version of the method based on some criteria. e.g., for overloaded methods, you would select a version of the method to call based on the types of its arguments.

In spack, multi-methods are used to ease the life of package authors. They allow methods like install() (or other methods called by install()) to declare multiple versions to be called when the package is instantiated with different specs. e.g., if the package is built with OpenMPI on x86_64, you might want to call a different install method than if it was built for mpich2 on BlueGene/Q. Likewise, you might want to do a different type of install for different versions of the package.

Multi-methods provide a simple decorator-based syntax for this that avoids overly complicated rat’s nests of if statements. Obviously, depending on the scenario, regular old conditionals might be clearer, so package authors should use their judgement.

exception spack.multimethod.MultiMethodError(message)

Bases: spack.error.SpackError

Superclass for multimethod dispatch errors

class spack.multimethod.MultiMethodMeta(name, bases, attr_dict)

Bases: type

This allows us to track the class’s dict during instantiation.

exception spack.multimethod.NoSuchMethodError(cls, method_name, spec, possible_specs)

Bases: spack.error.SpackError

Raised when we can’t find a version of a multi-method.

class spack.multimethod.SpecMultiMethod(default=None)

Bases: object

This implements a multi-method for Spack specs. Packages are instantiated with a particular spec, and you may want to execute different versions of methods based on what the spec looks like. For example, you might want to call a different version of install() for one platform than you call on another.

The SpecMultiMethod class implements a callable object that handles method dispatch. When it is called, it looks through registered methods and their associated specs, and it tries to find one that matches the package’s spec. If it finds one (and only one), it will call that method.

This is intended for use with decorators (see below). The decorator (see docs below) creates SpecMultiMethods and registers method versions with them.

To register a method, you can do something like this:
mm = SpecMultiMethod()
mm.register("^chaos_5_x86_64_ib", some_method)

The object registered needs to be a Spec or some string that will parse to be a valid spec.

When the mm is actually called, it selects a version of the method to call based on the sys_type of the object it is called on.

See the docs for decorators below for more details.

register(spec, method)

Register a version of a method for a particular spec.

class spack.multimethod.when(condition)

Bases: object

This annotation lets packages declare multiple versions of methods like install() that depend on the package’s spec. For example:

class SomePackage(Package):
    ...

    def install(self, prefix):
        # Do default install

    @when('target=x86_64:')
    def install(self, prefix):
        # This will be executed instead of the default install if
        # the package's target is in the x86_64 family.

    @when('target=ppc64:')
    def install(self, prefix):
        # This will be executed if the package's target is in
        # the ppc64 family

This allows each package to have a default version of install() AND specialized versions for particular platforms. The version that is called depends on the architecture of the instantiated package.

Note that this works for methods other than install, as well. So, if you only have part of the install that is platform specific, you could do this:

class SomePackage(Package):
    ...
    # virtual dependence on MPI.
    # could resolve to mpich, mpich2, OpenMPI
    depends_on('mpi')

    def setup(self):
        # do nothing in the default case
        pass

    @when('^openmpi')
    def setup(self):
        # do something special when this is built with OpenMPI for
        # its MPI implementations.


    def install(self, prefix):
        # Do common install stuff
        self.setup()
        # Do more common install stuff

Note that the default version of decorated methods must always come first. Otherwise it will override all of the platform-specific versions. There’s not much we can do to get around this because of the way decorators work.

spack.package module

This is where most of the action happens in Spack.

The spack package class structure is based strongly on Homebrew (http://brew.sh/), mainly because Homebrew makes it very easy to create packages.

exception spack.package.ActivationError(msg, long_msg=None)

Bases: spack.package.ExtensionError

Raised when there are problems activating an extension.

class spack.package.BundlePackage(spec)

Bases: spack.package.PackageBase

General purpose bundle, or no-code, package class.

build_system_class = 'BundlePackage'

This attribute is used in UI queries that need to know which build-system class we are using

has_code = False

Bundle packages do not have associated source or binary code.

phases = []

There are no phases by default but the property is required to support post-install hooks (e.g., for module generation).

exception spack.package.DependencyConflictError(conflict)

Bases: spack.error.SpackError

Raised when the dependencies cannot be flattened as asked for.

exception spack.package.ExtensionError(message, long_msg=None)

Bases: spack.package.PackageError

Superclass for all errors having to do with extension packages.

exception spack.package.FetchError(message, long_msg=None)

Bases: spack.error.SpackError

Raised when something goes wrong during fetch.

class spack.package.InstallPhase(name)

Bases: object

Manages a single phase of the installation.

This descriptor stores at creation time the name of the method it should search for execution. The method is retrieved at __get__ time, so that it can be overridden by subclasses of whatever class declared the phases.

It also provides hooks to execute arbitrary callbacks before and after the phase.

copy()
exception spack.package.InvalidPackageOpError(message, long_msg=None)

Bases: spack.package.PackageError

Raised when someone tries to perform an invalid operation on a package.

exception spack.package.NoURLError(cls)

Bases: spack.package.PackageError

Raised when someone tries to build a URL for a package with no URLs.

class spack.package.Package(spec)

Bases: spack.package.PackageBase

General purpose class with a single install phase that needs to be coded by packagers.

build_system_class = 'Package'

This attribute is used in UI queries that need to know which build-system class we are using

phases = ['install']

The one and only phase

class spack.package.PackageBase(spec)

Bases: spack.package.PackageViewMixin, object

This is the superclass for all spack packages.

*The Package class*

At its core, a package consists of a set of software to be installed. A package may focus on a piece of software and its associated software dependencies or it may simply be a set, or bundle, of software. The former requires defining how to fetch, verify (via, e.g., sha256), build, and install that software and the packages it depends on, so that dependencies can be installed along with the package itself. The latter, sometimes referred to as a no-source package, requires only defining the packages to be built.

Packages are written in pure Python.

There are two main parts of a Spack package:

  1. The package class. Classes contain directives, which are special functions, that add metadata (versions, patches, dependencies, and other information) to packages (see directives.py). Directives provide the constraints that are used as input to the concretizer.
  2. Package instances. Once instantiated, a package is essentially a software installer. Spack calls methods like do_install() on the Package object, and it uses those to drive user-implemented methods like patch(), install(), and other build steps. To install software, an instantiated package needs a concrete spec, which guides the behavior of the various install methods.

Packages are imported from repos (see repo.py).

Package DSL

Look in lib/spack/docs or check https://spack.readthedocs.io for the full documentation of the package domain-specific language. That used to be partially documented here, but as it grew, the docs here became increasingly out of date.

Package Lifecycle

A package’s lifecycle over a run of Spack looks something like this:

p = Package()             # Done for you by spack

p.do_fetch()              # downloads tarball from a URL (or VCS)
p.do_stage()              # expands tarball in a temp directory
p.do_patch()              # applies patches to expanded source
p.do_install()            # calls package's install() function
p.do_uninstall()          # removes install directory

although packages that do not have code have nothing to fetch so omit p.do_fetch().

There are also some other commands that clean the build area:

p.do_clean()              # removes the stage directory entirely
p.do_restage()            # removes the build directory and
                          # re-expands the archive.

The convention used here is that a do_* function is intended to be called internally by Spack commands (in spack.cmd). These aren’t for package writers to override, and doing so may break the functionality of the Package class.

Package creators have a lot of freedom, and they could technically override anything in this class. That is not usually required.

For most use cases, package creators typically just add attributes like homepage and, for a code-based package, url, or functions such as install(). There are many custom Package subclasses in the spack.build_systems package that make things even easier for specific build systems.

activate(extension, view, **kwargs)

Add the extension to the specified view.

Package authors can override this function to maintain some centralized state related to the set of activated extensions for a package.

Spack internals (commands, hooks, etc.) should call do_activate() method so that proper checks are always executed.

classmethod all_patches()

Retrieve all patches associated with the package.

Retrieves patches on the package itself as well as patches on the dependencies of the package.

all_urls

A list of all URLs in a package.

Check both class-level and version-specific URLs.

Returns:a list of URLs
Return type:list
architecture

Get the spack.architecture.Arch object that represents the environment in which this package will be built.

archive_files = []

List of glob expressions. Each expression must either be absolute or relative to the package source path. Matching artifacts found at the end of the build process will be copied in the same directory tree as _spack_build_logfile and _spack_build_envfile.

build_log_path

Return the expected (or current) build log file path. The path points to the staging build file until the software is successfully installed, when it points to the file in the installation directory.

classmethod build_system_flags(name, flags)

flag_handler that passes flags to the build system arguments. Any package using build_system_flags must also implement flags_to_build_system_args, or derive from a class that implements it. Currently, AutotoolsPackage and CMakePackage implement it.
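
A hedged sketch of opting into this handler from a package; AutotoolsPackage is assumed here because, per the note above, it implements flags_to_build_system_args:

class SomePackage(AutotoolsPackage):
    ...
    # Route user-specified flags to the build system arguments instead of the
    # compiler wrapper (requires flags_to_build_system_args).
    flag_handler = AutotoolsPackage.build_system_flags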

build_time_test_callbacks = None

A list or set of build time test functions to be called when tests are executed or ‘None’ if there are no such test functions.

compiler

Get the spack.compiler.Compiler object used to build this package

configure_args_path

Return the configure args file path associated with staging.

content_hash(content=None)

Create a hash based on the sources and logic used to build the package. This includes the contents of all applied patches and the contents of applicable functions in the package subclass.

deactivate(extension, view, **kwargs)

Remove all extension files from the specified view.

Package authors can override this method to support other extension mechanisms. Spack internals (commands, hooks, etc.) should call do_deactivate() method so that proper checks are always executed.

dependencies_of_type(*deptypes)

Get dependencies that can possibly have these deptypes.

This analyzes the package and determines which dependencies can be a certain kind of dependency. Note that they may not always be this kind of dependency, since dependencies can be optional, so something may be a build dependency in one configuration and a run dependency in another.

dependency_activations()
do_activate(view=None, with_dependencies=True, verbose=True)

Called on an extension to invoke the extendee’s activate method.

Commands should call this routine, and should not call activate() directly.

do_clean()

Removes the package’s build stage and source tarball.

do_deactivate(view=None, **kwargs)

Remove this extension package from the specified view. Called on the extension to invoke extendee’s deactivate() method.

remove_dependents=True deactivates extensions depending on this package instead of raising an error.

do_deprecate(deprecator, link_fn)

Deprecate this package in favor of deprecator spec

do_fetch(mirror_only=False)

Creates a stage directory and downloads the tarball for this package. Working directory will be set to the stage directory.

do_install(**kwargs)

Called by commands to install a package and or its dependencies.

Package implementations should override install() to describe their build process.

Parameters:
  • cache_only (bool) – Fail if binary package unavailable.
  • dirty (bool) – Don’t clean the build environment before installing.
  • explicit (bool) – True if package was explicitly installed, False if package was implicitly installed (as a dependency).
  • fail_fast (bool) – Fail if any dependency fails to install; otherwise, the default is to install as many dependencies as possible (i.e., best effort installation).
  • fake (bool) – Don’t really build; install fake stub files instead.
  • force (bool) – Install again, even if already installed.
  • install_deps (bool) – Install dependencies before installing this package
  • install_source (bool) – By default, source is not installed, but for debugging it might be useful to keep it around.
  • keep_prefix (bool) – Keep install prefix on failure. By default, destroys it.
  • keep_stage (bool) – By default, stage is destroyed only if there are no exceptions during build. Set to True to keep the stage even with exceptions.
  • restage (bool) – Force spack to restage the package source.
  • skip_patch (bool) – Skip patch stage of build if True.
  • stop_before (InstallPhase) – stop execution before this installation phase (or None)
  • stop_at (InstallPhase) – last installation phase to be executed (or None)
  • tests (bool or list or set) – False to run no tests, True to test all packages, or a list of package names to run tests for some
  • use_cache (bool) – Install from binary package, if available.
  • verbose (bool) – Display verbose build output (by default, suppresses it)
do_patch()

Applies patches if they haven’t been applied already.

do_restage()

Reverts expanded/checked out source to a pristine state.

do_stage(mirror_only=False)

Unpacks and expands the fetched tarball.

do_uninstall(force=False)

Uninstall this package by spec.

classmethod env_flags(name, flags)

flag_handler that adds all flags to canonical environment variables.

env_path

Return the build environment file path associated with staging.

extendable = False

Most packages are NOT extendable. Set to True if you want extensions.

extendee_args

Spec of the extendee of this package, or None if it is not an extension

extendee_spec

Spec of the extendee of this package, or None if it is not an extension

extends(spec)

Returns True if this package extends the given spec.

If self.spec is concrete, this returns whether this package extends the given spec.

If self.spec is not concrete, this returns whether this package may extend the given spec.

fetch_options = {}

Set of additional options used when fetching package versions.

fetch_remote_versions(concurrency=128)

Find remote versions of this package.

Uses list_url and any other URLs listed in the package file.

Returns:a dictionary mapping versions to URLs
Return type:dict
fetcher
classmethod flag_handler(name, flags)

flag_handler that injects all flags through the compiler wrapper.

flags_to_build_system_args(flags)
format_doc(**kwargs)

Wrap doc string at 72 characters and format nicely

fullname = 'spack.package'
global_license_dir

Returns the directory where global license files for all packages are stored.

global_license_file

Returns the path where a global license file for this particular package should be stored.

has_code = True

Most Spack packages are used to install source or binary code; those that do not have code can be used to install a set of other Spack packages.

classmethod inject_flags(name, flags)

flag_handler that injects all flags through the compiler wrapper.

install_configure_args_path

Return the configure args file path on successful installation.

install_env_path

Return the build environment file path on successful installation.

install_log_path

Return the build log file path on successful installation.

install_time_test_callbacks = None

A list or set of install time test functions to be called when tests are executed or ‘None’ if there are no such test functions.

installed

Installation status of a package.

Returns:True if the package has been installed, False otherwise.
installed_upstream
is_activated(view)

Return True if package is activated.

is_extension
license_comment = '#'

String. Contains the symbol used by the license manager to denote a comment. Defaults to #.

license_files = []

List of strings. These are files that the software searches for when looking for a license. All file paths must be relative to the installation directory. More complex packages like Intel may require multiple licenses for individual components. Defaults to the empty list.

license_required = False

Boolean. If set to True, this software requires a license. If set to False, all of the license_* attributes will be ignored. Defaults to False.

license_url = ''

String. A URL pointing to license setup instructions for the software. Defaults to the empty string.

license_vars = []

List of strings. Environment variables that can be set to tell the software where to look for a license if it is not in the usual location. Defaults to the empty list.

log_path

Return the build log file path associated with staging.

maintainers = []

List of strings which contains GitHub usernames of package maintainers. Do not include @ here in order not to unnecessarily ping the users.

manual_download = False

Boolean. Set to True for packages that require a manual download. This is currently only used by package sanity tests.

metadata_attrs = ['homepage', 'url', 'urls', 'list_url', 'extendable', 'parallel', 'make_jobs']

List of attributes to be excluded from a package’s hash.

module = <module 'spack.package' from '/home/docs/checkouts/readthedocs.org/user_builds/spack/checkouts/v0.15.3/lib/spack/docs/_spack_root/lib/spack/spack/package.py'>
name = 'package'
namespace = 'spack'
nearest_url(version)

Finds the URL with the “closest” version to version.

This uses the following precedence order:

  1. Find the next lowest or equal version with a URL.
  2. If no lower URL, return the next higher URL.
  3. If no higher URL, return None.
package_dir = '/home/docs/checkouts/readthedocs.org/user_builds/spack/checkouts/v0.15.3/lib/spack/docs/_spack_root/lib/spack/spack'
parallel = True

By default we build in parallel. Subclasses can override this.

classmethod possible_dependencies(transitive=True, expand_virtuals=True, deptype='all', visited=None, missing=None)

Return dict of possible dependencies of this package.

Parameters:
  • transitive (bool, optional) – return all transitive dependencies if True, only direct dependencies if False (default True).
  • expand_virtuals (bool, optional) – expand virtual dependencies into all possible implementations (default True)
  • deptype (str or tuple, optional) – dependency types to consider
  • visited (dict, optional) – dict of names of dependencies visited so far, mapped to their immediate dependencies’ names.
  • missing (dict, optional) – dict to populate with packages and their missing dependencies.
Returns:dictionary mapping dependency names to their immediate dependencies
Return type:dict

Each item in the returned dictionary maps a (potentially transitive) dependency of this package to its possible immediate dependencies. If expand_virtuals is False, virtual package names will be inserted as keys mapped to empty sets of dependencies. Virtuals, if not expanded, are treated as though they have no immediate dependencies.

Missing dependencies by default are ignored, but if a missing dict is provided, it will be populated with package names mapped to any dependencies they have that are in no repositories. This is only populated if transitive is True.

Note: the returned dict includes the package itself.

prefix

Get the prefix into which this package should be installed.

provides(vpkg_name)

True if this package provides a virtual package with the specified name

remove_prefix()

Removes the prefix for a package along with any empty parent directories

rpath

Get the rpath this package links with, as a list of paths.

rpath_args

Get the rpath args as a string, with -Wl,-rpath, for each element

run_tests = False

By default do not run tests within package’s install()

sanity_check_is_dir = []

List of prefix-relative directory paths (or a single path). If these do not exist after install, or if they exist but are not directories, sanity checks will fail.

sanity_check_is_file = []

List of prefix-relative file paths (or a single path). If these do not exist after install, or if they exist but are not files, sanity checks fail.
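For instance, a package class might declare both attributes so the post-install sanity checks verify key artifacts (the package name and layout below are hypothetical):

class Libfoo(Package):                          # hypothetical package
    # ... homepage, url, version directives ...
    sanity_check_is_file = ['include/foo.h']    # must exist and be a regular file
    sanity_check_is_dir = ['lib', 'share/foo']  # must exist and be directories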

sanity_check_prefix()

This function checks whether install succeeded.

setup_build_environment(env)

Sets up the build environment for a package.

This method will be called before the current package prefix exists in Spack’s store.

Parameters:env (EnvironmentModifications) – environment modifications to be applied when the package is built. Package authors can call methods on it to alter the build environment.
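A minimal sketch of overriding this in a package.py (the flag and the zlib dependency are assumptions made for illustration; env is the EnvironmentModifications object Spack passes in):

def setup_build_environment(self, env):
    # Hypothetical: build with position-independent code
    env.append_flags('CFLAGS', '-fPIC')
    # Hypothetical: point the build system at a dependency's prefix
    env.set('ZLIB_ROOT', self.spec['zlib'].prefix)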
setup_dependent_build_environment(env, dependent_spec)

Sets up the build environment of packages that depend on this one.

This is similar to setup_build_environment, but it is used to modify the build environments of packages that depend on this one.

This gives packages like Python and others that follow the extension model a way to implement common environment or compile-time settings for dependencies.

This method will be called before the dependent package prefix exists in Spack’s store.

Examples

1. Installing python modules generally requires PYTHONPATH to point to the lib/pythonX.Y/site-packages directory in the module’s install prefix. This method could be used to set that variable.

Parameters:
  • env (EnvironmentModifications) – environment modifications to be applied when the dependent package is built. Package authors can call methods on it to alter the build environment.
  • dependent_spec (Spec) – the spec of the dependent package about to be built. This allows the extendee (self) to query the dependent’s state. Note that this package’s spec is available as self.spec
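A hedged sketch of the PYTHONPATH example above, as an extendable package might write it (the site-packages layout shown is an assumption; join_path is the filesystem helper normally available in package files):

def setup_dependent_build_environment(self, env, dependent_spec):
    # Let dependents import modules installed into this prefix.
    # The exact lib/pythonX.Y layout is simplified for illustration.
    site_packages = join_path(self.prefix.lib, 'python2.7', 'site-packages')
    env.prepend_path('PYTHONPATH', site_packages)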
setup_dependent_package(module, dependent_spec)

Set up Python module-scope variables for dependent packages.

Called before the install() method of dependents.

Default implementation does nothing, but this can be overridden by an extendable package to set up the module of its extensions. This is useful if there are some common steps to installing all extensions for a certain package.

Examples:

  1. Extensions often need to invoke the python interpreter from the Python installation being extended. This routine can put a python() Executable object in the module scope for the extension package to simplify extension installs.
  2. MPI compilers could set some variables in the dependent’s scope that point to mpicc, mpicxx, etc., allowing them to be called by common name regardless of which MPI is used.
  3. BLAS/LAPACK implementations can set some variables indicating the path to their libraries, since these paths differ by BLAS/LAPACK implementation.
Parameters:
  • module (spack.package.PackageBase.module) – The Python module object of the dependent package. Packages can use this to set module-scope variables for the dependent to use.
  • dependent_spec (Spec) – The spec of the dependent package about to be built. This allows the extendee (self) to query the dependent’s state. Note that this package’s spec is available as self.spec.
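A hedged sketch of example 1 above, exposing a python() Executable to extensions (Executable and join_path are the helpers normally available to package files; the layout is illustrative):

def setup_dependent_package(self, module, dependent_spec):
    # Give extension packages a module-scope `python` callable that runs
    # this installation's interpreter.
    module.python = Executable(join_path(self.prefix.bin, 'python'))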
setup_dependent_run_environment(env, dependent_spec)

Sets up the run environment of packages that depend on this one.

This is similar to setup_run_environment, but it is used to modify the run environments of packages that depend on this one.

This gives packages like Python and others that follow the extension model a way to implement common environment or run-time settings for dependencies.

Parameters:
  • env (EnvironmentModifications) – environment modifications to be applied when the dependent package is run. Package authors can call methods on it to alter the build environment.
  • dependent_spec (Spec) – The spec of the dependent package about to be run. This allows the extendee (self) to query the dependent’s state. Note that this package’s spec is available as self.spec
setup_run_environment(env)

Sets up the run environment for a package.

Parameters:env (EnvironmentModifications) – environment modifications to be applied when the package is run. Package authors can call methods on it to alter the run environment.
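A short sketch for a package that needs its binaries and man pages available when it runs (the subdirectories are assumptions for the example):

def setup_run_environment(self, env):
    # Hypothetical layout: expose bin/ and share/man at run time
    env.prepend_path('PATH', self.prefix.bin)
    env.prepend_path('MANPATH', self.prefix.share.man)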
stage

Get the build staging area for this package.

This automatically instantiates a Stage object if the package doesn’t have one yet, but it does not create the Stage directory on the filesystem.

transitive_rpaths = True

When True, add RPATHs for the entire DAG. When False, add RPATHs only for immediate dependencies.

static uninstall_by_spec(spec, force=False, deprecator=None)
unit_test_check()

Hook for unit tests to assert things about package internals.

Unit tests can override this function to perform checks after Package.install and all post-install hooks run, but before the database is updated.

The overridden function may indicate that the install procedure should terminate early (before updating the database) by returning False (or any value such that bool(result) is False).

Returns:True to continue, False to skip install()
Return type:(bool)
url_for_version(version)

Returns a URL from which the specified version of this package may be downloaded.

Parameters:version (Version) – the version for which a URL is sought. See the Version class in version.py.

url_version(version)

Given a version, this returns a string that should be substituted into the package’s URL to download that version.

By default, this just returns the version string. Subclasses may need to override this, e.g. for boost versions where you need to ensure that there are _’s in the download URL.
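For instance, a package whose download URLs use underscores instead of dots could override it roughly as follows (a sketch, not the actual boost implementation):

def url_version(self, version):
    # Turn a version like 1.72.0 into 1_72_0 to match the download URL scheme
    return str(version).replace('.', '_')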

use_xcode = False

By default, do not set up a mockup Xcode on macOS with Clang.

version
version_urls()

OrderedDict of explicitly defined URLs for versions of this package.

Returns:An OrderedDict (version -> URL) of the different versions of this package, sorted by version.

A version’s URL only appears in the result if it has an explicitly defined url argument. So, this list may be empty if a package only defines url at the top level.

view()

Create a view with the prefix of this package as the root. Extensions added to this view will modify the installation prefix of this package.

virtuals_provided

virtual packages provided by this package with its spec

exception spack.package.PackageError(message, long_msg=None)

Bases: spack.error.SpackError

Raised when something is wrong with a package definition.

class spack.package.PackageMeta(name, bases, attr_dict)

Bases: spack.directives.DirectiveMeta, spack.mixins.PackageMixinsMeta, spack.multimethod.MultiMethodMeta

Package metaclass for supporting directives (e.g., depends_on) and phases

fullname

Name of this package, including the namespace

module

Module object (not just the name) that this package is defined in.

We use this to add variables to package modules. This makes install() methods easier to write (e.g., can call configure())

name

The name of this package.

The name of a package is the name of its Python module, without the containing module names.

namespace

Spack namespace for the package, which identifies its repo.

package_dir

Directory where the package.py file lives.

phase_fmt = '_InstallPhase_{0}'
static register_callback(check_type, *phases)
exception spack.package.PackageStillNeededError(spec, dependents)

Bases: spack.installer.InstallError

Raised when package is still needed by another on uninstall.

exception spack.package.PackageVersionError(version)

Bases: spack.package.PackageError

Raised when a version URL cannot automatically be determined.

class spack.package.PackageViewMixin

Bases: object

This collects all functionality related to adding installed Spack packages to views. Packages can customize how they are added to views by overriding these functions.

add_files_to_view(view, merge_map)

Given a map of package files to destination paths in the view, add the files to the view. By default this adds all files. Alternative implementations may skip some files, for example if other packages linked into the view already include the file.

remove_files_from_view(view, merge_map)

Given a map of package files to files currently linked in the view, remove the files from the view. The default implementation removes all files. Alternative implementations may not remove all files. For example if two packages include the same file, it should only be removed when both packages are removed.

view_destination(view)

The target root directory: each file is added relative to this directory.

view_file_conflicts(view, merge_map)

Report any files which prevent adding this package to the view. The default implementation looks for any files which already exist. Alternative implementations may allow some of the files to exist in the view (in this case they would be omitted from the results).

view_source()

The source root directory that will be added to the view: files are added such that their path relative to the view destination matches their path relative to the view source.

spack.package.flatten_dependencies(spec, flat_dir)

Make each dependency of spec present in dir via symlink.

Execute a dummy install and flatten dependencies.

This routine can be used in a package.py definition by setting install = install_dependency_symlinks.

This feature comes in handy for creating a common location for the installation of third-party libraries.

spack.package.on_package_attributes(**attr_dict)

Decorator: executes the instance function only if the object has the given attribute values.

Executes the decorated method only if at the moment of calling the instance has attributes that are equal to certain values.

Parameters:attr_dict (dict) – dictionary mapping attribute names to their required values
spack.package.possible_dependencies(*pkg_or_spec, **kwargs)

Get the possible dependencies of a number of packages.

See PackageBase.possible_dependencies for details.

spack.package.run_after(*phases)

Registers a method of a package to be run after a given phase

spack.package.run_before(*phases)

Registers a method of a package to be run before a given phase
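A hedged sketch of how run_after is commonly combined with on_package_attributes inside a package class (run_before works analogously; the phase name, the installed file, and the check itself are illustrative, and os is assumed to be imported at the top of the package file):

@run_after('install')
@on_package_attributes(run_tests=True)
def post_install_check(self):
    # Runs after the 'install' phase, but only when run_tests is True.
    if not os.path.isfile(self.prefix.bin.foo):   # hypothetical installed file
        raise RuntimeError('foo was not installed correctly')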

spack.package.use_cray_compiler_names()

Compiler names for builds that rely on cray compiler names.

spack.package_prefs module

class spack.package_prefs.PackagePrefs(pkgname, component, vpkg=None)

Bases: object

Defines the sort order for a set of specs.

Spack’s package preference implementation uses PackagePrefs objects to define sort order. The PackagePrefs class looks at Spack’s packages.yaml configuration and, when called on a spec, returns a key that can be used to sort that spec in order of the user’s preferences.

You can use it like this:

# key function sorts CompilerSpecs for mpich in order of preference
kf = PackagePrefs('mpich', 'compiler')
compiler_list.sort(key=kf)

Or like this:

# key function to sort VersionLists for OpenMPI in order of preference.
kf = PackagePrefs('openmpi', 'version')
version_list.sort(key=kf)

Optionally, you can sort in order of preferred virtual dependency providers. To do that, provide ‘providers’ and a third argument denoting the virtual package (e.g., mpi):

kf = PackagePrefs('trilinos', 'providers', 'mpi')
provider_spec_list.sort(key=kf)
classmethod has_preferred_providers(pkgname, vpkg)

Whether a specific package has preferred providers for a given virtual package.

classmethod has_preferred_targets(pkg_name)

Whether a specific package has preferred targets.

classmethod order_for_package(pkgname, component, vpkg=None, all=True)

Given a package name, a sort component (e.g., version, compiler, …), and an optional vpkg, return the list from the packages config.

classmethod preferred_variants(pkg_name)

Return a VariantMap of preferred variants/values for a spec.

exception spack.package_prefs.VirtualInPackagesYAMLError(message, long_message=None)

Bases: spack.error.SpackError

Raised when a disallowed virtual is found in packages.yaml

spack.package_prefs.get_package_dir_permissions(spec)

Return the permissions configured for the spec.

Include the GID bit if group permissions are on. This makes the group attribute sticky for the directory. Package-specific settings take precedence over settings for all packages.

spack.package_prefs.get_package_group(spec)

Return the unix group associated with the spec.

Package-specific settings take precedence over settings for all packages.

spack.package_prefs.get_package_permissions(spec)

Return the permissions configured for the spec.

Package-specific settings take precedence over settings for all packages.

spack.package_prefs.is_spec_buildable(spec)

Return True if the spec is configured as buildable.

spack.package_prefs.spec_externals(spec)

Return a list of external specs (w/external directory path filled in), one for each known external installation.

spack.package_test module

spack.package_test.compare_output(current_output, blessed_output)

Compare blessed and current output of executables.

spack.package_test.compare_output_file(current_output, blessed_output_file)

Same as above, but when the blessed output is given as a file.

spack.package_test.compile_c_and_execute(source_file, include_flags, link_flags)

Compile the C source_file with include_flags and link_flags, run the resulting executable, and return its output.

spack.parse module

exception spack.parse.LexError(message, string, pos)

Bases: spack.parse.ParseError

Raised when we don’t know how to lex something.

class spack.parse.Lexer(lexicon0, mode_switches_01=[], lexicon1=[], mode_switches_10=[])

Bases: object

Base class for Lexers that keep track of line numbers.

lex(text)
lex_word(word)
token(type, value='')
exception spack.parse.ParseError(message, string, pos)

Bases: spack.error.SpackError

Raised when we hit an error while parsing.

class spack.parse.Parser(lexer)

Bases: object

Base class for simple recursive descent parsers.

accept(id)

Put the next symbol in self.token if accepted, then call gettok()

expect(id)

Like accept(), but fails if we don’t like the next token.

gettok()

Puts the next token in the input stream into self.next.

last_token_error(message)

Raise an error about the previous token in the stream.

next_token_error(message)

Raise an error about the next token in the stream.

parse(text)
push_tokens(iterable)

Adds all tokens in some iterable to the token stream.

setup(text)
unexpected_token()
class spack.parse.Token(type, value='', start=0, end=0)

Bases: object

Represents tokens; generated from input by lexer and fed to parse().

is_a(type)

spack.patch module

class spack.patch.FilePatch(pkg, relative_path, level, working_dir, ordering_key=None)

Bases: spack.patch.Patch

Describes a patch that is retrieved from a file in the repository.

Parameters:
  • pkg (str) – the class object for the package that owns the patch
  • relative_path (str) – path to patch, relative to the repository directory for a package.
  • level (int) – level to pass to patch command
  • working_dir (str) – path within the source directory where patch should be applied
sha256
to_dict()

Partial dictionary – subclasses should add to this.

exception spack.patch.NoSuchPatchError(message, long_message=None)

Bases: spack.error.SpackError

Raised when a patch file doesn’t exist.

class spack.patch.Patch(pkg, path_or_url, level, working_dir)

Bases: object

Base class for patches.

Parameters:pkg (str) – the package that owns the patch

The owning package is not necessarily the package to apply the patch to – in the case where a dependent package patches its dependency, it is the dependent’s fullname.

apply(stage)

Apply a patch to source in a stage.

Parameters:stage (spack.stage.Stage) – stage where source code lives
clean()

Clean up the patch stage in case of a UrlPatch

fetch()

Fetch the patch in case of a UrlPatch

stage
to_dict()

Partial dictionary – subclasses should add to this.

class spack.patch.PatchCache(data=None)

Bases: object

Index of patches used in a repository, by sha256 hash.

This allows us to look up patches without loading all packages. It’s also needed to properly implement dependency patching, as we need a way to look up patches that come from packages not in the Spec sub-DAG.

The patch index is structured like this in a file (this is YAML, but we write JSON):

patches:
    sha256:
        namespace1.package1:
            <patch json>
        namespace2.package2:
            <patch json>
        ... etc. ...
classmethod from_json(stream)
patch_for_package(sha256, pkg)

Look up a patch in the index and build a patch object for it.

Parameters:
  • sha256 (str) – sha256 hash to look up
  • pkg (spack.package.Package) – Package object to get patch for.

We build patch objects lazily because building them requires that we have information about the package’s location in its repo.

to_json(stream)
update(other)

Update this cache with the contents of another.

update_package(pkg_fullname)
exception spack.patch.PatchDirectiveError(message, long_message=None)

Bases: spack.error.SpackError

Raised when the wrong arguments are supplied to the patch directive.

class spack.patch.UrlPatch(pkg, url, level=1, working_dir='.', ordering_key=None, **kwargs)

Bases: spack.patch.Patch

Describes a patch that is retrieved from a URL.

Parameters:
  • pkg (str) – the package that owns the patch
  • url (str) – URL where the patch can be fetched
  • level (int) – level to pass to patch command
  • working_dir (str) – path within the source directory where patch should be applied
clean()

Clean up the patch stage in case of a UrlPatch

fetch()

Retrieve the patch in a temporary stage and compute self.path

Parameters:stage – stage for the package that needs to be patched
stage
to_dict()

Partial dictionary – subclasses should add to this.

spack.patch.apply_patch(stage, patch_path, level=1, working_dir='.')

Apply the patch at patch_path to code in the stage.

Parameters:
  • stage (spack.stage.Stage) – stage with code that will be patched
  • patch_path (str) – filesystem location for the patch to apply
  • level (int, optional) – patch level (default 1)
  • working_dir (str) – relative path within the stage to change to (default ‘.’)
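A brief, hedged sketch of patching staged source with a local patch file (the stage object and the patch path are placeholders for this example):

from spack.patch import apply_patch

# `stage` stands for a spack.stage.Stage whose source has already been
# fetched and expanded; the patch path is purely illustrative.
apply_patch(stage, '/path/to/fix-build.patch', level=1, working_dir='.')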
spack.patch.from_dict(dictionary)

Create a patch from json dictionary.

spack.paths module

Defines paths that are part of Spack’s directory structure.

Do not import other spack modules here. This module is used throughout Spack and should bring in a minimal number of external dependencies.

spack.paths.bin_path = '/home/docs/checkouts/readthedocs.org/user_builds/spack/checkouts/v0.15.3/lib/spack/docs/_spack_root/bin'

bin directory in the spack prefix

spack.paths.prefix = '/home/docs/checkouts/readthedocs.org/user_builds/spack/checkouts/v0.15.3/lib/spack/docs/_spack_root'

This file lives in $prefix/lib/spack/spack/__file__

spack.paths.spack_root = '/home/docs/checkouts/readthedocs.org/user_builds/spack/checkouts/v0.15.3/lib/spack/docs/_spack_root'

synonym for prefix

spack.paths.spack_script = '/home/docs/checkouts/readthedocs.org/user_builds/spack/checkouts/v0.15.3/lib/spack/docs/_spack_root/bin/spack'

The spack script itself

spack.paths.user_config_path = '/home/docs/.spack'

User configuration location

spack.pkgkit module

pkgkit is a set of useful build tools and directives for packages.

Everything in this module is automatically imported into Spack package files.

spack.projections module

spack.projections.get_projection(projections, spec)

Get the projection for a spec from a projections dict.
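A small, hedged sketch of how a projections dictionary (as found in a view configuration) is queried; the spec and projection strings are arbitrary:

from spack.spec import Spec
from spack.projections import get_projection

projections = {'python': '{name}-{version}', 'all': '{name}'}
spec = Spec('python@3.7.4')
print(get_projection(projections, spec))   # expected to pick the 'python' projection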

spack.provider_index module

Classes and functions to manage providers of virtual dependencies

class spack.provider_index.ProviderIndex(specs=None, restrict=False)

Bases: spack.provider_index._IndexBase

copy()

Return a deep copy of this index.

static from_json(stream)

Construct a provider index from its JSON representation.

Parameters:stream – stream where to read from the JSON data
merge(other)

Merge another provider index into this one.

Parameters:other (ProviderIndex) – provider index to be merged
remove_provider(pkg_name)

Remove a provider from the ProviderIndex.

to_json(stream=None)

Dump a JSON representation of this object.

Parameters:stream – stream where to dump
update(spec)

Update the provider index with additional virtual specs.

Parameters:spec – spec potentially providing additional virtual specs
exception spack.provider_index.ProviderIndexError(message, long_message=None)

Bases: spack.error.SpackError

Raised when there is a problem with a ProviderIndex.

spack.relocate module

exception spack.relocate.BinaryStringReplacementError(file_path, old_len, new_len)

Bases: spack.error.SpackError

exception spack.relocate.BinaryTextReplaceError(old_path, new_path)

Bases: spack.error.SpackError

exception spack.relocate.InstallRootStringError(file_path, root_path)

Bases: spack.error.SpackError

spack.relocate.file_is_relocatable(file, paths_to_relocate=None)

Returns True if the file passed as argument is relocatable.

Parameters:file – absolute path of the file to be analyzed
Returns:True or false
Raises:ValueError – if the file does not exist or the path is not absolute
spack.relocate.is_binary(file)

Returns true if a file is binary, False otherwise

Parameters:file – file to be tested
Returns:True or False
spack.relocate.is_relocatable(spec)

Returns True if an installed spec is relocatable.

Parameters:spec (Spec) – spec to be analyzed
Returns:True if the binaries of an installed spec are relocatable and False otherwise.
Raises:ValueError – if the spec is not installed
spack.relocate.macho_find_paths(orig_rpaths, deps, idpath, old_layout_root, prefix_to_prefix)

Parameters:
  • orig_rpaths – original rpaths from the mach-o binaries
  • deps – dependency libraries of the mach-o binaries
  • idpath – id path of the mach-o libraries
  • old_layout_root – old install directory layout root
  • prefix_to_prefix – dictionary mapping prefixes in the old directory layout to directories in the new directory layout
Returns:paths_to_paths dictionary mapping all of the old paths to new paths

spack.relocate.macho_make_paths_normal(orig_path_name, rpaths, deps, idpath)

Return a dictionary mapping the relativized rpaths to the original rpaths. This dictionary is used to replace paths in mach-o binaries. @loader_path is replaced with the dirname of the original path name in rpaths and deps; idpath is replaced with the original path name.

spack.relocate.macho_make_paths_relative(path_name, old_layout_root, rpaths, deps, idpath)

Return a dictionary mapping the original rpaths to the relativized rpaths. This dictionary is used to replace paths in mach-o binaries. Replace old_dir with relative path from dirname of path name in rpaths and deps; idpath is replaced with @rpath/libname.

spack.relocate.macholib_get_paths(cur_path)

Get rpaths, dependencies and id of mach-o objects using python macholib package

spack.relocate.make_elf_binaries_relative(new_binaries, orig_binaries, orig_layout_root)

Replace the original RPATHs in the new binaries making them relative to the original layout root.

Parameters:
  • new_binaries (list) – new binaries whose RPATHs is to be made relative
  • orig_binaries (list) – original binaries
  • orig_layout_root (str) – path to be used as a base for making RPATHs relative

spack.relocate.make_link_relative(new_links, orig_links)

Compute the relative target from the original link and make the new link relative.

Parameters:
  • new_links (list) – new links to be made relative
  • orig_links (list) – original links
spack.relocate.make_macho_binaries_relative(cur_path_names, orig_path_names, old_layout_root)

Replace old RPATHs with paths relative to old_dir in binary files

spack.relocate.mime_type(file)

Returns the mime type and subtype of a file.

Parameters:file – file to be analyzed
Returns:Tuple containing the MIME type and subtype
spack.relocate.modify_macho_object(cur_path, rpaths, deps, idpath, paths_to_paths)

This function is used to make mach-o buildcaches on macOS by replacing old paths with new paths using install_name_tool.

Parameters:
  • cur_path – mach-o binary to be modified
  • rpaths – original rpaths
  • deps – original dependency paths
  • idpath – original id path, if a mach-o library
  • paths_to_paths – dictionary mapping paths in the old install layout to the new install layout

spack.relocate.modify_object_macholib(cur_path, paths_to_paths)

This function is used when installing mach-o buildcaches on Linux. It rewrites mach-o loader commands for the dependency library paths of mach-o binaries and the id path of mach-o libraries. Rewriting of rpaths is handled by replace_prefix_bin.

Parameters:
  • cur_path – mach-o binary to be modified
  • paths_to_paths – dictionary mapping paths in the old install layout to the new install layout

spack.relocate.needs_binary_relocation(m_type, m_subtype)

Returns True if the file with MIME type/subtype passed as arguments needs binary relocation, False otherwise.

Parameters:
  • m_type (str) – MIME type of the file
  • m_subtype (str) – MIME subtype of the file
spack.relocate.needs_text_relocation(m_type, m_subtype)

Returns True if the file with MIME type/subtype passed as arguments needs text relocation, False otherwise.

Parameters:
  • m_type (str) – MIME type of the file
  • m_subtype (str) – MIME subtype of the file
spack.relocate.raise_if_not_relocatable(binaries, allow_root)

Raise an error if any binary in the list is not relocatable.

Parameters:
  • binaries (list) – list of binaries to check
  • allow_root (bool) – whether root dir is allowed or not in a binary
Raises:

InstallRootStringError – if the file is not relocatable

spack.relocate.relocate_elf_binaries(binaries, orig_root, new_root, new_prefixes, rel, orig_prefix, new_prefix)

Relocate the binaries passed as arguments by changing their RPATHs.

Use patchelf to get the original RPATHs and then replace them with rpaths in the new directory layout.

New RPATHs are determined from a dictionary mapping the prefixes in the old directory layout to the prefixes in the new directory layout if the rpath was in the old layout root, i.e. system paths are not replaced.

Parameters:
  • binaries (list) – list of binaries that might need relocation, located in the new prefix
  • orig_root (str) – original root to be substituted
  • new_root (str) – new root to be used, only relevant for relative RPATHs
  • new_prefixes (dict) – dictionary that maps the original prefixes to where they should be relocated
  • rel (bool) – True if the RPATHs are relative, False if they are absolute
  • orig_prefix (str) – prefix where the executable was originally located
  • new_prefix (str) – prefix where we want to relocate the executable

spack.relocate.relocate_links(links, orig_layout_root, orig_install_prefix, new_install_prefix)

Relocate links to a new install prefix.

The symbolic links are relative to the original installation prefix. The old link target is read and the placeholder is replaced by the old layout root. If the old link target is in the old install prefix, the new link target is create by replacing the old install prefix with the new install prefix.

Parameters:
  • links (list) – list of links to be relocated
  • orig_layout_root (str) – original layout root
  • orig_install_prefix (str) – install prefix of the original installation
  • new_install_prefix (str) – install prefix where we want to relocate
spack.relocate.relocate_macho_binaries(path_names, old_layout_root, new_layout_root, prefix_to_prefix, rel, old_prefix, new_prefix)

Use the macholib python package to get the rpaths, dependent libraries and library identity for libraries from the MachO object. Modify them with the replacement paths queried from the dictionary mapping old layout prefixes to hashes and the dictionary mapping hashes to the new layout prefixes.

spack.relocate.relocate_text(files, orig_layout_root, new_layout_root, orig_install_prefix, new_install_prefix, orig_spack, new_spack, new_prefixes)

Relocate text files from the original installation prefix to the new prefix.

Relocation also affects the path in Spack’s sbang script.

Parameters:
  • files (list) – text files to be relocated
  • orig_layout_root (str) – original layout root
  • new_layout_root (str) – new layout root
  • orig_install_prefix (str) – install prefix of the original installation
  • new_install_prefix (str) – install prefix where we want to relocate
  • orig_spack (str) – path to the original Spack
  • new_spack (str) – path to the new Spack
  • new_prefixes (dict) – dictionary that maps the original prefixes to where they should be relocated
spack.relocate.relocate_text_bin(binaries, orig_install_prefix, new_install_prefix, orig_spack, new_spack, new_prefixes)

Replace null terminated path strings hard coded into binaries.

The new install prefix must be shorter than the original one.

Parameters:
  • binaries (list) – binaries to be relocated
  • orig_install_prefix (str) – install prefix of the original installation
  • new_install_prefix (str) – install prefix where we want to relocate
  • orig_spack (str) – path to the original Spack
  • new_spack (str) – path to the new Spack
  • new_prefixes (dict) – dictionary that maps the original prefixes to where they should be relocated
Raises:

BinaryTextReplaceError – when the new path is longer than the old path

spack.repo module

exception spack.repo.BadRepoError(message, long_message=None)

Bases: spack.repo.RepoError

Raised when repo layout is invalid.

exception spack.repo.FailedConstructorError(name, exc_type, exc_obj, exc_tb)

Bases: spack.repo.RepoError

Raised when a package’s class constructor fails.

class spack.repo.FastPackageChecker(packages_path)

Bases: collections.abc.Mapping

Cache that maps package names to the stats obtained on the ‘package.py’ files associated with them.

For each repository a cache is maintained at class level, and shared among all instances referring to it. Update of the global cache is done lazily during instance initialization.

last_mtime()
exception spack.repo.IndexError(message, long_message=None)

Bases: spack.repo.RepoError

Raised when there’s an error with an index.

class spack.repo.Indexer

Bases: object

Adaptor for indexes that need to be generated when repos are updated.

create()
needs_update(pkg)

Whether an update is needed when the package file hasn’t changed.

Returns:True if this package needs its index updated, False otherwise.
Return type:(bool)

We already automatically update indexes when package files change, but other files (like patches) may change underneath the package file. This method can be used to check additional package-specific files whenever they’re loaded, to tell the RepoIndex to update the index just for that package.

read(stream)

Read this index from a provided file object.

update(pkg_fullname)

Update the index in memory with information about a package.

write(stream)

Write the index to a file object.

exception spack.repo.InvalidNamespaceError(message, long_message=None)

Bases: spack.repo.RepoError

Raised when an invalid namespace is encountered.

spack.repo.NOT_PROVIDED = <object object>

Guaranteed unused default value for some functions.

exception spack.repo.NoRepoConfiguredError(message, long_message=None)

Bases: spack.repo.RepoError

Raised when there are no repositories configured.

class spack.repo.PatchIndexer

Bases: spack.repo.Indexer

Lifecycle methods for patch cache.

needs_update()

Whether an update is needed when the package file hasn’t changed.

Returns:True if this package needs its index updated, False otherwise.
Return type:(bool)

We already automatically update indexes when package files change, but other files (like patches) may change underneath the package file. This method can be used to check additional package-specific files whenever they’re loaded, to tell the RepoIndex to update the index just for that package.

read(stream)

Read this index from a provided file object.

update(pkg_fullname)

Update the index in memory with information about a package.

write(stream)

Write the index to a file object.

class spack.repo.ProviderIndexer

Bases: spack.repo.Indexer

Lifecycle methods for virtual package providers.

read(stream)

Read this index from a provided file object.

update(pkg_fullname)

Update the index in memory with information about a package.

write(stream)

Write the index to a file object.

class spack.repo.Repo(root)

Bases: object

Class representing a package repository in the filesystem.

Each package repository must have a top-level configuration file called repo.yaml.

Currently, repo.yaml must define:

namespace:
A Python namespace where the repository’s packages should live.
all_package_names()

Returns a sorted list of all package names in the Repo.

all_packages()

Iterator over all packages in the repository.

Use this with care, because loading packages is slow.

dirname_for_package_name(pkg_name)

Get the directory name for a particular package. This is the directory that contains its package.py file.

dump_provenance(spec, path)

Dump provenance information for a spec to a particular path.

This dumps the package file and any associated patch files. Raises UnknownPackageError if not found.

exists(pkg_name)

Whether a package with the supplied name exists.

extensions_for(extendee_spec)
filename_for_package_name(pkg_name)

Get the filename for the module we should load for a particular package. Packages for a Repo live in $root/<package_name>/package.py

This will return a proper package.py path even if the package doesn’t exist yet, so callers will need to ensure the package exists before importing.

find_module(fullname, path=None)

Python find_module import hook.

Returns this Repo if it can load the module; None if not.

get(spec)

Returns the package associated with the supplied spec.

get_pkg_class(pkg_name)

Get the class for the package out of its module.

First loads (or fetches from cache) a module for the package. Then extracts the package class from the module according to Spack’s naming convention.

index

Construct the index for this repo lazily.

is_prefix(fullname)

True if fullname is a prefix of this Repo’s namespace.

is_virtual(pkg_name)

True if the package with this name is virtual, False otherwise.

last_mtime()

Time a package file in this repo was last updated.

load_module(fullname)

Python importer load hook.

Tries to load the module; raises an ImportError if it can’t.

packages_with_tags(*tags)
patch_index

Index of patches and packages they’re defined on.

provider_index

A provider index with names specific to this repo.

providers_for(vpkg_spec)
purge()

Clear entire package instance cache.

real_name(import_name)

Allow users to import Spack packages using Python identifiers.

A python identifier might map to many different Spack package names due to hyphen/underscore ambiguity.

Easy example:
num3proxy -> 3proxy
Ambiguous:
foo_bar -> foo_bar, foo-bar
More ambiguous:
foo_bar_baz -> foo_bar_baz, foo-bar-baz, foo_bar-baz, foo-bar_baz
tag_index

Index of tags and which packages they’re defined on.

exception spack.repo.RepoError(message, long_message=None)

Bases: spack.error.SpackError

Superclass for repository-related errors.

class spack.repo.RepoIndex(package_checker, namespace)

Bases: object

Container class that manages a set of Indexers for a Repo.

This class is responsible for checking packages in a repository for updates (using FastPackageChecker) and for regenerating indexes when they’re needed.

Indexers should be added to the RepoIndex using add_index(name, indexer), and they should support the interface defined by Indexer, so that the RepoIndex can read, generate, and update stored indices.

Generated indexes are accessed by name via __getitem__().

add_indexer(name, indexer)

Add an indexer to the repo index.

Parameters:
  • name (str) – name of this indexer
  • indexer (object) – an object that supports create(), read(), write(), and get_index() operations
class spack.repo.RepoPath(*repos)

Bases: object

A RepoPath is a list of repos that function as one.

It functions exactly like a Repo, but it operates on the combined results of the Repos in its list instead of on a single package repository.

Parameters:repos (list) – list of Repo objects or paths to put in this RepoPath
all_package_names()

Return all unique package names in all repositories.

all_packages()
dirname_for_package_name(pkg_name)
dump_provenance(spec, path)

Dump provenance information for a spec to a particular path.

This dumps the package file and any associated patch files. Raises UnknownPackageError if not found.

exists(pkg_name)

Whether a package with the given name exists in the path’s repos.

Note that virtual packages do not “exist”.

extensions_for(extendee_spec)
filename_for_package_name(pkg_name)
find_module(fullname, path=None)

Implements precedence for overlaid namespaces.

Checks each namespace in self.repos for packages, and also handles loading empty containing namespaces.

first_repo()

Get the first repo in precedence order.

get(spec)

Returns the package associated with the supplied spec.

get_pkg_class(pkg_name)

Find a class for the spec’s package and return the class object.

get_repo(namespace, default=<object object>)

Get a repository by namespace.

Parameters:namespace – Look up this namespace in the RepoPath, and return it if found.

Optional Arguments:

default:

If default is provided, return it when the namespace isn’t found. If not, raise an UnknownNamespaceError.
is_virtual(pkg_name)

True if the package with this name is virtual, False otherwise.

last_mtime()

Time a package file in this repo was last updated.

load_module(fullname)

Handles loading container namespaces when necessary.

See Repo for how actual package modules are loaded.

packages_with_tags(*tags)
patch_index

Merged PatchIndex from all Repos in the RepoPath.

provider_index

Merged ProviderIndex from all Repos in the RepoPath.

providers_for(vpkg_spec)
put_first(repo)

Add repo first in the search path.

put_last(repo)

Add repo last in the search path.

remove(repo)

Remove a repo from the search path.

repo_for_pkg(spec)

Given a spec, get the repository for its package.

class spack.repo.SpackNamespace(namespace)

Bases: module

Allow lazy loading of modules.

class spack.repo.TagIndex

Bases: collections.abc.Mapping

Maps tags to list of packages.

static from_json(stream)
to_json(stream)
update_package(pkg_name)

Updates a package in the tag index.

Parameters:pkg_name (str) – name of the package to be updated in the index
class spack.repo.TagIndexer

Bases: spack.repo.Indexer

Lifecycle methods for a TagIndex on a Repo.

read(stream)

Read this index from a provided file object.

update(pkg_fullname)

Update the index in memory with information about a package.

write(stream)

Write the index to a file object.

exception spack.repo.UnknownEntityError(message, long_message=None)

Bases: spack.repo.RepoError

Raised when we encounter a package spack doesn’t have.

exception spack.repo.UnknownNamespaceError(namespace)

Bases: spack.repo.UnknownEntityError

Raised when we encounter an unknown namespace

exception spack.repo.UnknownPackageError(name, repo=None)

Bases: spack.repo.UnknownEntityError

Raised when we encounter a package spack doesn’t have.

spack.repo.additional_repository(repository)

Adds temporarily a repository to the default one.

Parameters:repository – repository to be added
spack.repo.all_package_names()

Convenience wrapper around spack.repo.path.all_package_names().

spack.repo.autospec(function)

Decorator that automatically converts the first argument of a function to a Spec.

spack.repo.create_or_construct(path, namespace=None)

Create a repository, or just return a Repo if it already exists.

spack.repo.create_repo(root, namespace=None)

Create a new repository in root with the specified namespace.

If the namespace is not provided, use basename of root. Return the canonicalized path and namespace of the created repository.

spack.repo.get(spec)

Convenience wrapper around spack.repo.path.get().

spack.repo.get_full_namespace(namespace)

Returns the full namespace of a repository, given its relative one.

spack.repo.path = <spack.repo.RepoPath object>

Singleton repo path instance

spack.repo.repo_namespace = 'spack.pkg'

Super-namespace for all packages. Package modules are imported as spack.pkg.<namespace>.<pkg-name>.

spack.repo.set_path(repo)

Set the path singleton to a specific value.

Overwrite path and register it as an importer in sys.meta_path if it is a Repo or RepoPath.

spack.repo.swap(repo_path)

Temporarily use another RepoPath.

spack.report module

Tools to produce reports of spec installations

spack.report.valid_formats = [None, 'junit', 'cdash']

Allowed report formats

class spack.report.collect_info(format_name, args)

Bases: object

Collects information to build a report while installing and dumps it on exit.

If the format name is not None, this context manager decorates PackageInstaller._install_task when entering the context for a PackageBase.do_install operation and unrolls the change when exiting.

Within the context, only the specs that are passed to it on initialization will be recorded for the report. Data from other specs will be discarded.

Examples

# The file 'junit.xml' is written when exiting
# the context
specs = [Spec('hdf5').concretized()]
with collect_info(specs, 'junit', 'junit.xml'):
    # A report will be generated for these specs...
    for spec in specs:
        spec.do_install()
    # ...but not for this one
    Spec('zlib').concretized().do_install()
Parameters:
  • format_name (str or None) – one of the supported formats
  • args (dict) – args passed to spack install
Raises:

ValueError – when format_name is not in valid_formats

concretization_report(msg)

spack.reporter module

class spack.reporter.Reporter(args)

Bases: object

Base class for report writers.

build_report(filename, report_data)
concretization_report(filename, msg)

spack.resource module

Describes an optional resource needed for a build.

Typically a bunch of sources that can be built in-tree within another package to enable optional features.

class spack.resource.Resource(name, fetcher, destination, placement)

Bases: object

Represents an optional resource to be fetched by a package.

Aggregates a name, a fetcher, a destination and a placement.

spack.s3_handler module

class spack.s3_handler.UrllibS3Handler(debuglevel=0, context=None, check_hostname=None)

Bases: urllib.request.HTTPSHandler

s3_open(req)
class spack.s3_handler.WrapStream(raw)

Bases: _io.BufferedReader

detach()

Disconnect this buffer from its underlying raw stream and return it.

After the raw stream has been detached, the buffer is in an unusable state.

read(*args, **kwargs)

Read and return up to n bytes.

If the argument is omitted, None, or negative, reads and returns all data until EOF.

If the argument is positive, and the underlying raw stream is not ‘interactive’, multiple raw reads may be issued to satisfy the byte count (unless EOF is reached first). But for interactive raw streams (as well as sockets and pipes), at most one raw read will be issued, and a short result does not imply that EOF is imminent.

Returns an empty bytes object on EOF.

Returns None if the underlying raw stream was open in non-blocking mode and no data is available at the moment.

spack.spec module

Spack allows very fine-grained control over how packages are installed and over how they are built and configured. To make this easy, it has its own syntax for declaring a dependence. We call a descriptor of a particular package configuration a “spec”.

The syntax looks like this:

$ spack install mpileaks ^openmpi @1.2:1.4 +debug %intel @12.1 =bgqos_0
                1        2        3        4      5      6     7

The first part of this is the command, ‘spack install’. The rest of the line is a spec for a particular installation of the mpileaks package.

  1. The package to install

  2. A dependency of the package, prefixed by ^

  3. A version descriptor for the package. This can either be a specific version, like “1.2”, or it can be a range of versions, e.g. “1.2:1.4”. If multiple specific versions or multiple ranges are acceptable, they can be separated by commas, e.g. if a package will only build with versions 1.0, 1.2-1.4, and 1.6-1.8 of mvapich, you could say:

    depends_on("mvapich@1.0,1.2:1.4,1.6:1.8")

  4. A compile-time variant of the package. If you need openmpi to be built in debug mode for your package to work, you can require it by adding +debug to the openmpi spec when you depend on it. If you do NOT want the debug option to be enabled, then replace this with -debug.

  5. The name of the compiler to build with.

  6. The versions of the compiler to build with. Note that the identifier for a compiler version is the same ‘@’ that is used for a package version. A version list denoted by ‘@’ is associated with the compiler only if it comes immediately after the compiler name. Otherwise it will be associated with the current package spec.

  7. The architecture to build with. This is needed on machines where cross-compilation is required

Here is the EBNF grammar for a spec:

spec-list    = { spec [ dep-list ] }
dep_list     = { ^ spec }
spec         = id [ options ]
options      = { @version-list | +variant | -variant | ~variant |
                 %compiler | arch=architecture | [ flag ]=value}
flag         = { cflags | cxxflags | fcflags | fflags | cppflags |
                 ldflags | ldlibs }
variant      = id
architecture = id
compiler     = id [ version-list ]
version-list = version [ { , version } ]
version      = id | id: | :id | id:id
id           = [A-Za-z0-9_][A-Za-z0-9_.-]*

Identifiers using the <name>=<value> command, such as architectures and compiler flags, require a space before the name.

There is one context-sensitive part: ids in versions may contain ‘.’, while other ids may not.

There is one ambiguity: since ‘-’ is allowed in an id, you need to put whitespace before -variant for it to be tokenized properly. You can either use whitespace, or you can just use ~variant since it means the same thing. Spack uses ~variant in directory names and in the canonical form of specs to avoid ambiguity. Both are provided because ~ can cause shell expansion when it is the first character in an id typed on the command line.

class spack.spec.Spec(spec_like=None, normal=False, concrete=False, external_path=None, external_module=None, full_hash=None)

Bases: object

build_hash(length=None)

Hash used to store specs in environments.

This hash includes build dependencies, and we need to preserve them to be able to rebuild an entire environment for a user.

cformat(*args, **kwargs)

Same as format, but color defaults to auto instead of False.

colorized()
common_dependencies(other)

Return names of dependencies that self and other have in common.

concrete

A spec is concrete if it describes a single build of a package.

More formally, a spec is concrete if concretize() has been called on it and it has been marked _concrete.

Concrete specs either can be or have been built. All constraints have been resolved, optional dependencies have been added or removed, a compiler has been chosen, and all variants have values.

concretize(tests=False)

A spec is concrete if it describes one build of a package uniquely. This will ensure that this spec is concrete.

Parameters:tests (list or bool) – list of packages that will need test dependencies, or True/False for test all/none

If this spec could describe more than one version, variant, or build of a package, this will add constraints to make it concrete.

Some rigorous validation and checks are also performed on the spec. Concretizing ensures that it is self-consistent and that it’s consistent with requirements of its packages. See flatten() and normalize() for more details on this.

concretized()

This is a non-destructive version of concretize(). First clones, then returns a concrete version of this package without modifying this package.

constrain(other, deps=True)

Merge the constraints of other with self.

Returns True if the spec changed as a result, False if not.

constrained(other, deps=True)

Return a constrained copy without modifying this spec.

copy(deps=True, **kwargs)

Make a copy of this spec.

Parameters:
  • deps (bool or tuple) – Defaults to True. If boolean, controls whether dependencies are copied (copied if True). If a tuple is provided, only dependencies of types matching those in the tuple are copied.
  • kwargs – additional arguments for internal use (passed to _dup).
Returns:

A copy of this spec.

Examples

Deep copy with dependencies:

spec.copy()
spec.copy(deps=True)

Shallow copy (no dependencies):

spec.copy(deps=False)

Only build and run dependencies:

spec.copy(deps=('build', 'run'))
cshort_spec

Returns an auto-colorized version of self.short_spec.

dag_hash(length=None)

This is Spack’s default hash, used to identify installations.

At the moment, it excludes build dependencies to avoid rebuilding packages whenever build dependency versions change. We will revise this to include more detailed provenance when the concretizer can more aggressively reuse installed dependencies.

dag_hash_bit_prefix(bits)

Get the first <bits> bits of the DAG hash as an integer type.

dep_difference(other)

Returns dependencies in self that are not in other.

dep_string()
dependencies(deptype='all')
dependencies_dict(deptype='all')
static dependencies_from_node_dict(node)
dependents(deptype='all')
dependents_dict(deptype='all')
eq_dag(other, deptypes=True)

True if the full dependency DAGs of specs are equal.

eq_node(other)

Equality with another spec, not including dependencies.

external
flat_dependencies(**kwargs)

Return a DependencyMap containing all of this spec’s dependencies with their constraints merged.

If copy is True, returns merged copies of its dependencies without modifying the spec it’s called on.

If copy is False, clears this spec’s dependencies and returns them. This disconnects all dependency links including transitive dependencies, except for concrete specs: if a spec is concrete it will not be disconnected from its dependencies (although a non-concrete spec with concrete dependencies will be disconnected from those dependencies).

format(format_string='{name}{@version}{%compiler.name}{@compiler.version}{compiler_flags}{variants}{arch=architecture}', **kwargs)

Prints out particular pieces of a spec, depending on what is in the format string.

Using the {attribute} syntax, any field of the spec can be selected. Those attributes can be recursive. For example, s.format('{compiler.version}') will print the version of the compiler.

Commonly used attributes of the Spec for format strings include:

name
version
compiler
compiler.name
compiler.version
compiler_flags
variants
architecture
architecture.platform
architecture.os
architecture.target
prefix

Some additional special-case properties can be added:

hash[:len]    The DAG hash with optional length argument
spack_root    The spack root directory
spack_install The spack install directory

The ^ sigil can be used to access dependencies by name. s.format('{^mpi.name}') will print the name of the MPI implementation in the spec.

The @, %, arch=, and / sigils can be used to include the sigil with the printed string. These sigils may only be used with the appropriate attributes, listed below:

@        ``{@version}``, ``{@compiler.version}``
%        ``{%compiler}``, ``{%compiler.name}``
arch=    ``{arch=architecture}``
/        ``{/hash}``, ``{/hash:7}``, etc

The @ sigil may also be used for any other property named version. Sigils printed with the attribute string are only printed if the attribute string is non-empty, and are colored according to the color of the attribute.

Sigils are not used for printing variants. Variants listed by name naturally print with their sigil. For example, spec.format('{variants.debug}') would print either +debug or ~debug depending on the name of the variant. Non-boolean variants print as name=value. To print variant names or values independently, use spec.format('{variants.<name>.name}') or spec.format('{variants.<name>.value}').

Spec format strings use \ as the escape character. Use \{ and \} for literal braces, and \\ for the literal \ character. Also use \$ for the literal $ to differentiate from previous, deprecated format string syntax.

The previous format strings are deprecated. They can still be accessed by the old_format method. The format method will call old_format if the character $ appears unescaped in the format string.

Parameters:

format_string (str) – string containing the format to be expanded

Keyword Arguments:
 
  • color (bool) – True if returned string is colored
  • transform (dict) – maps full-string formats to a callable that accepts a string and returns another one
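A brief, hedged illustration of the attribute syntax (the spec string is arbitrary and does not need to be concrete):

from spack.spec import Spec

s = Spec('zlib@1.2.11 %gcc@9.3.0')
s.format('{name}-{version}')                 # 'zlib-1.2.11'
s.format('{name} {@version} {%compiler}')    # sigils are printed with the values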
static from_dict(data)

Construct a spec from YAML.

Parameters: data – a nested dict/list data structure read from YAML or JSON.

static from_json(stream)

Construct a spec from JSON.

Parameters: stream – string or file object to read from.

static from_literal(spec_dict, normal=True)

Builds a Spec from a dictionary containing the spec literal.

The dictionary must have a single top level key, representing the root, and as many secondary level keys as needed in the spec.

The keys can be either a string or a Spec or a tuple containing the Spec and the dependency types.

Parameters:
  • spec_dict (dict) – the dictionary containing the spec literal
  • normal (bool) – if True the same key appearing at different levels of the spec_dict will map to the same object in memory.

Examples

A simple spec foo with no dependencies:

{'foo': None}

A spec foo with a (build, link) dependency bar:

{'foo':
    {'bar:build,link': None}}

A spec with a diamond dependency and various build types:

{'dt-diamond': {
    'dt-diamond-left:build,link': {
        'dt-diamond-bottom:build': None
    },
    'dt-diamond-right:build,link': {
        'dt-diamond-bottom:build,link,run': None
    }
}}

The same spec with a double copy of dt-diamond-bottom and no diamond structure:

{'dt-diamond': {
    'dt-diamond-left:build,link': {
        'dt-diamond-bottom:build': None
    },
    'dt-diamond-right:build,link': {
        'dt-diamond-bottom:build,link,run': None
    }
}, normal=False}

Constructing a spec using a Spec object as key:

mpich = Spec('mpich')
libelf = Spec('libelf@1.8.11')
expected_normalized = Spec.from_literal({
    'mpileaks': {
        'callpath': {
            'dyninst': {
                'libdwarf': {libelf: None},
                libelf: None
            },
            mpich: None
        },
        mpich: None
    },
})
static from_node_dict(node)
static from_yaml(stream)

Construct a spec from YAML.

Parameters: stream – string or file object to read from.

full_hash(length=None)

Hash to determine when to rebuild packages in the build pipeline.

This hash includes the package hash, so that we know when package files have changed between builds. It does not currently include build dependencies, though it likely should.

TODO: investigate whether to include build deps here.

fullname
get_dependency(name)
index(deptype='all')

Return DependencyMap that points to all the dependencies in this spec.

install_status()

Helper for tree to print DB install status.

static is_virtual(name)

Test if a name is virtual without requiring a Spec.

ne_dag(other, deptypes=True)

True if the full dependency DAGs of specs are not equal.

ne_node(other)

Inequality with another spec, not including dependencies.

normalize(force=False, tests=False, user_spec_deps=None)

When specs are parsed, any dependencies specified are hanging off the root, and ONLY the ones that were explicitly provided are there. Normalization turns a partial flat spec into a DAG, where:

  1. Known dependencies of the root package are in the DAG.
  2. Each node’s dependencies dict only contains its known direct deps.
  3. There is only ONE unique spec for each package in the DAG.
    • This includes virtual packages. If there is a non-virtual package that provides a virtual package that is in the spec, then we replace the virtual package with the non-virtual one.

TODO: normalize should probably implement some form of cycle detection, to ensure that the spec is actually a DAG.

normalized()

Return a normalized copy of this spec without modifying this spec.

old_format(format_string='$_$@$%@+$+$=', **kwargs)

The format strings you can provide are:

$_   Package name
$.   Full package name (with namespace)
$@   Version with '@' prefix
$%   Compiler with '%' prefix
$%@  Compiler with '%' prefix & compiler version with '@' prefix
$%+  Compiler with '%' prefix & compiler flags prefixed by name
$%@+ Compiler, compiler version, and compiler flags with same
     prefixes as above
$+   Options
$=   Architecture prefixed by 'arch='
$/   7-char prefix of DAG hash with '-' prefix
$$   $

You can also use full-string versions, which elide the prefixes:

${PACKAGE}       Package name
${FULLPACKAGE}   Full package name (with namespace)
${VERSION}       Version
${COMPILER}      Full compiler string
${COMPILERNAME}  Compiler name
${COMPILERVER}   Compiler version
${COMPILERFLAGS} Compiler flags
${OPTIONS}       Options
${ARCHITECTURE}  Architecture
${PLATFORM}      Platform
${OS}            Operating System
${TARGET}        Target
${SHA1}          Dependencies 8-char sha1 prefix
${HASH:len}      DAG hash with optional length specifier

${DEP:name:OPTION} Evaluates as OPTION would for self['name']

${SPACK_ROOT}    The spack root directory
${SPACK_INSTALL} The default spack install directory,
                 ${SPACK_PREFIX}/opt
${PREFIX}        The package prefix
${NAMESPACE}     The package namespace

Note these are case-insensitive: for example you can specify either ${PACKAGE} or ${package}.

Optionally you can provide a width, e.g. $20_ for a 20-wide name. Like printf, you can provide ‘-‘ for left justification, e.g. $-20_ for a left-justified name.

Anything else is copied verbatim into the output stream.

Parameters:

format_string (str) – string containing the format to be expanded

Keyword Arguments:
 
  • color (bool) – True if returned string is colored
  • transform (dict) – maps full-string formats to a callable that accepts a string and returns another one

Examples

The following line:

s = spec.format('$_$@$+')

translates to the name, version, and options of the package, but no dependencies, arch, or compiler.

TODO: allow, e.g., $6# to customize short hash length
TODO: allow, e.g., $// for full hash.
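
For instance, combining tokens documented above (a sketch; spec is assumed to be an existing concrete Spec, since the hash tokens require concretization):

spec.old_format('$_$@$%@+$/')                       # name, version, compiler+flags, short hash
spec.old_format('${PACKAGE}-${VERSION}-${HASH:7}')  # full-string forms, 7-character hash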

os
package
package_class

Internal package call gets only the class object for a package. Use this to just get package metadata.

patches

Return patch objects for any patch sha256 sums on this Spec.

This is for use after concretization to iterate over any patches associated with this spec.

TODO: this only checks in the package; it doesn’t resurrect old patches from install directories, but it probably should.

platform
prefix
static read_yaml_dep_specs(dependency_dict)

Read the DependencySpec portion of a YAML-formatted Spec.

This needs to be backward-compatible with older spack spec formats so that reindex will work on old specs/databases.

root

Follow dependent links and find the root of this spec’s DAG.

Spack specs have a single root (the package being installed).

satisfies(other, deps=True, strict=False, strict_deps=False)

Determine if this spec satisfies all constraints of another.

There are two senses for satisfies:

  • loose (default): the absence of a constraint in self implies that it could be satisfied by other, so we only check that there are no conflicts with other for constraints that this spec actually has.
  • strict: strict means that we must meet all the constraints specified on other.
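A brief sketch of the two senses (the package and version are only illustrative):

from spack.spec import Spec

a = Spec('mpich')
b = Spec('mpich@3.0.4')

a.satisfies(b)               # True (loose): a has no version constraint to conflict with b
a.satisfies(b, strict=True)  # False (strict): a does not meet b's @3.0.4 constraint
b.satisfies(a, strict=True)  # True: every constraint of a (there are none) is met by b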
satisfies_dependencies(other, strict=False)

This checks constraints on common dependencies against each other.

short_spec

Returns a version of the spec with the dependencies hashed instead of completely enumerated.

sorted_deps()

Return a list of all dependencies sorted by name.

target
to_dict(hash=<spack.hash_types.SpecHashDescriptor object>)

Create a dictionary suitable for writing this spec to YAML or JSON.

This dictionary is like the one that is ultimately written to a spec.yaml file in each Spack installation directory. For example, for sqlite:

{
    'spec': [
        {
            'sqlite': {
                'version': '3.28.0',
                'arch': {
                    'platform': 'darwin',
                    'platform_os': 'mojave',
                    'target': 'x86_64',
                },
                'compiler': {
                    'name': 'apple-clang',
                    'version': '10.0.0',
                },
                'namespace': 'builtin',
                'parameters': {
                    'fts': 'true',
                    'functions': 'false',
                    'cflags': [],
                    'cppflags': [],
                    'cxxflags': [],
                    'fflags': [],
                    'ldflags': [],
                    'ldlibs': [],
                },
                'dependencies': {
                    'readline': {
                        'hash': 'zvaa4lhlhilypw5quj3akyd3apbq5gap',
                        'type': ['build', 'link'],
                    }
                },
                'hash': '722dzmgymxyxd6ovjvh4742kcetkqtfs'
            }
        },
        # ... more node dicts for readline and its dependencies ...
    ]
}

Note that this dictionary starts with the ‘spec’ key, and what follows is a list starting with the root spec, followed by its dependencies in preorder. Each node in the list also has a ‘hash’ key that contains the hash of the node without the hash field included.

In the example, the package content hash is not included in the spec, but if package_hash were true there would be an additional field on each node called package_hash.

from_dict() can be used to read back in a spec that has been converted to a dictionary, serialized, and read back in.

Parameters:
  • deptype (tuple or str) – dependency types to include when traversing the spec.
  • package_hash (bool) – whether to include package content hashes in the dictionary.
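A minimal round-trip sketch, assuming a concretizable package such as zlib:

import spack.spec

s = spack.spec.Spec('zlib')
s.concretize()
d = s.to_dict()                           # {'spec': [<root node dict>, <dependency dicts> ...]}
restored = spack.spec.Spec.from_dict(d)   # rebuild the spec from the node list
assert restored.dag_hash() == s.dag_hash()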
to_json(stream=None, hash=<spack.hash_types.SpecHashDescriptor object>)
to_node_dict(hash=<spack.hash_types.SpecHashDescriptor object>)

Create a dictionary representing the state of this Spec.

to_node_dict creates the content that is eventually hashed by Spack to create identifiers like the DAG hash (see dag_hash()). Example result of to_node_dict for the sqlite package:

{
    'sqlite': {
        'version': '3.28.0',
        'arch': {
            'platform': 'darwin',
            'platform_os': 'mojave',
            'target': 'x86_64',
        },
        'compiler': {
            'name': 'apple-clang',
            'version': '10.0.0',
        },
        'namespace': 'builtin',
        'parameters': {
            'fts': 'true',
            'functions': 'false',
            'cflags': [],
            'cppflags': [],
            'cxxflags': [],
            'fflags': [],
            'ldflags': [],
            'ldlibs': [],
        },
        'dependencies': {
            'readline': {
                'hash': 'zvaa4lhlhilypw5quj3akyd3apbq5gap',
                'type': ['build', 'link'],
            }
        },
    }
}

Note that the dictionary returned does not include the hash of the root of the spec, though it does include hashes for each dependency, and (optionally) the package file corresponding to each node.

See to_dict() for a “complete” spec hash, with hashes for each node and nodes for each dependency (instead of just their hashes).

Parameters:hash (SpecHashDescriptor) –
to_record_dict()

Return a “flat” dictionary with name and hash as top-level keys.

This is similar to to_node_dict(), but the name and the hash are “flattened” into the dictionary for easier parsing by tools like jq. Instead of being keyed by name or hash, the dictionary has top-level “name” and “hash” fields, e.g.:

{
  "name": "openssl",
  "hash": "3ws7bsihwbn44ghf6ep4s6h4y2o6eznv",
  "version": "3.28.0",
  "arch": {
  ...
}

But is otherwise the same as to_node_dict().
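
For example, one flat record per spec can be dumped as JSON and filtered with a tool like jq (a sketch, assuming a concretizable openssl package):

import json
import spack.spec

s = spack.spec.Spec('openssl')
s.concretize()
print(json.dumps(s.to_record_dict()))   # top-level 'name' and 'hash' keys, then node contents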

to_yaml(stream=None, hash=<spack.hash_types.SpecHashDescriptor object>)
traverse(**kwargs)
traverse_edges(visited=None, d=0, deptype='all', dep_spec=None, **kwargs)

Generic traversal of the DAG represented by this spec. This will yield each node in the spec. Options:

order [=pre|post]
    Order to traverse spec nodes. Defaults to preorder traversal. Options are:

    ‘pre’:  Pre-order traversal; each node is yielded before its
            children in the dependency DAG.
    ‘post’: Post-order traversal; each node is yielded after its
            children in the dependency DAG.

cover [=nodes|edges|paths]
    Determines how extensively to cover the DAG. Possible values:

    ‘nodes’: Visit each node in the DAG only once. Every node
             yielded by this function will be unique.
    ‘edges’: If a node has been visited once but is reached along a
             new path from the root, yield it but do not descend
             into it. This traverses each ‘edge’ in the DAG once.
    ‘paths’: Explore every unique path reachable from the root.
             This descends into visited subtrees and will yield
             nodes twice if they’re reachable by multiple paths.

depth [=False]
    Defaults to False. When True, yields not just nodes in the spec,
    but also their depth from the root in a (depth, node) tuple.

key [=id]
    Allow a custom key function to track the identity of nodes
    in the traversal.

root [=True]
    If False, this won’t yield the root node, just its descendants.

direction [=children|parents]
    If ‘children’, does a traversal of this spec’s children. If
    ‘parents’, traverses upwards in the DAG towards the root.
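
For example (a sketch; spec is assumed to be an existing concrete Spec):

# Post-order walk over unique nodes, indented by depth from the root:
for depth, node in spec.traverse(order='post', cover='nodes', depth=True):
    print('  ' * depth + node.name)

# Walk upwards from this spec towards its dependents instead:
for parent in spec.traverse(direction='parents', root=False):
    print(parent.name)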
tree(**kwargs)

Prints out this spec and its dependencies, tree-formatted with indentation.

validate_or_raise()

Checks that names and values in this spec are real. If they’re not, it will raise an appropriate exception.

version
virtual

Right now, a spec is virtual if no package exists with its name.

TODO: revisit this – might need to use a separate namespace and be more explicit about this. Possible idea: just use convention and make virtual deps all caps, e.g., MPI vs mpi.

virtual_dependencies()

Return list of any virtual deps in this spec.

spack.spec.parse(string)

Returns a list of specs from an input string. For creating one spec, see Spec() constructor.

exception spack.spec.SpecParseError(parse_error)

Bases: spack.error.SpecError

Wrapper for ParseError for when we’re parsing specs.

exception spack.spec.DuplicateDependencyError(message, long_message=None)

Bases: spack.error.SpecError

Raised when the same dependency occurs in a spec twice.

exception spack.spec.DuplicateCompilerSpecError(message, long_message=None)

Bases: spack.error.SpecError

Raised when the same compiler occurs in a spec twice.

exception spack.spec.UnsupportedCompilerError(compiler_name)

Bases: spack.error.SpecError

Raised when the user asks for a compiler spack doesn’t know about.

exception spack.spec.DuplicateArchitectureError(message, long_message=None)

Bases: spack.error.SpecError

Raised when the same architecture occurs in a spec twice.

exception spack.spec.InconsistentSpecError(message, long_message=None)

Bases: spack.error.SpecError

Raised when two nodes in the same spec DAG have inconsistent constraints.

exception spack.spec.InvalidDependencyError(pkg, deps)

Bases: spack.error.SpecError

Raised when a dependency in a spec is not actually a dependency of the package.

exception spack.spec.NoProviderError(vpkg)

Bases: spack.error.SpecError

Raised when there is no package that provides a particular virtual dependency.

exception spack.spec.MultipleProviderError(vpkg, providers)

Bases: spack.error.SpecError

Raised when multiple packages provide a particular virtual dependency.

exception spack.spec.UnsatisfiableSpecNameError(provided, required)

Bases: spack.error.UnsatisfiableSpecError

Raised when two specs aren’t even for the same package.

exception spack.spec.UnsatisfiableVersionSpecError(provided, required)

Bases: spack.error.UnsatisfiableSpecError

Raised when a spec version conflicts with package constraints.

exception spack.spec.UnsatisfiableCompilerSpecError(provided, required)

Bases: spack.error.UnsatisfiableSpecError

Raised when a spec compiler conflicts with package constraints.

exception spack.spec.UnsatisfiableCompilerFlagSpecError(provided, required)

Bases: spack.error.UnsatisfiableSpecError

Raised when a spec compiler flag conflicts with package constraints.

exception spack.spec.UnsatisfiableArchitectureSpecError(provided, required)

Bases: spack.error.UnsatisfiableSpecError

Raised when a spec architecture conflicts with package constraints.

exception spack.spec.UnsatisfiableProviderSpecError(provided, required)

Bases: spack.error.UnsatisfiableSpecError

Raised when a provider is supplied but constraints don’t match a vpkg requirement

exception spack.spec.UnsatisfiableDependencySpecError(provided, required)

Bases: spack.error.UnsatisfiableSpecError

Raised when some dependencies of constrained specs are incompatible.

exception spack.spec.AmbiguousHashError(msg, *specs)

Bases: spack.error.SpecError

exception spack.spec.InvalidHashError(spec, hash)

Bases: spack.error.SpecError

exception spack.spec.NoSuchHashError(hash)

Bases: spack.error.SpecError

exception spack.spec.RedundantSpecError(spec, addition)

Bases: spack.error.SpecError

spack.spec_list module

exception spack.spec_list.InvalidSpecConstraintError(message, long_message=None)

Bases: spack.spec_list.SpecListError

Error class for invalid spec constraints at concretize time.

class spack.spec_list.SpecList(name='specs', yaml_list=[], reference={})

Bases: object

add(spec)
extend(other, copy_reference=True)
remove(spec)
specs
specs_as_constraints
specs_as_yaml_list
update_reference(reference)
exception spack.spec_list.SpecListError(message, long_message=None)

Bases: spack.error.SpackError

Error class for all errors related to SpecList objects.

exception spack.spec_list.UndefinedReferenceError(message, long_message=None)

Bases: spack.spec_list.SpecListError

Error class for undefined references in Spack stacks.

spack.spec_list.spec_ordering_key(s)

spack.stage module

class spack.stage.DIYStage(path)

Bases: object

Simple class that allows any directory to be a spack stage. Consequently, it does not expect or require that the source path adhere to the standard directory naming convention.

cache_local()
check()
create()
destroy()
expand_archive()
expanded

Returns True since the source_path must exist.

fetch(*args, **kwargs)
managed_by_spack = False
restage()
class spack.stage.ResourceStage(url_or_fetch_strategy, root, resource, **kwargs)

Bases: spack.stage.Stage

expand_archive()

Changes to the stage directory and attempts to expand the downloaded archive. Fails if the stage is not set up or if the archive is not yet downloaded.

restage()

Removes the expanded archive path if it exists, then re-expands the archive.

exception spack.stage.RestageError(message, long_message=None)

Bases: spack.stage.StageError

Error encountered during restaging.

class spack.stage.Stage(url_or_fetch_strategy, name=None, mirror_paths=None, keep=False, path=None, lock=True, search_fn=None)

Bases: object

Manages a temporary stage directory for building.

A Stage object is a context manager that handles a directory where some source code is downloaded and built before being installed. It handles fetching the source code, either as an archive to be expanded or by checking it out of a repository. A stage’s lifecycle looks like this:

with Stage() as stage:      # Context manager creates and destroys the
                            # stage directory
    stage.fetch()           # Fetch a source archive into the stage.
    stage.expand_archive()  # Expand the archive into source_path.
    <install>               # Build and install the archive.
                            # (handled by user of Stage)

When used as a context manager, the stage is automatically destroyed if no exception is raised by the context. If an exception is raised, the stage is left in the filesystem and NOT destroyed, for potential reuse later.

You can also use the stage’s create/destroy functions manually, like this:

stage = Stage()
try:
    stage.create()          # Explicitly create the stage directory.
    stage.fetch()           # Fetch a source archive into the stage.
    stage.expand_archive()  # Expand the archive into source_path.
    <install>               # Build and install the archive.
                            # (handled by user of Stage)
finally:
    stage.destroy()         # Explicitly destroy the stage directory.

There are two kinds of stages: named and unnamed. Named stages can persist between runs of spack, e.g. if you fetched a tarball but didn’t finish building it, you won’t have to fetch it again.

Unnamed stages are created using standard mkdtemp mechanisms or similar, and are intended to persist for only one run of spack.

archive_file

Path to the source archive within this stage directory.

cache_local()
cache_mirror(mirror, stats)

Perform a fetch if the resource is not already cached

Parameters:
  • mirror (MirrorCache) – the mirror to cache this Stage’s resource in
  • stats (MirrorStats) – this is updated depending on whether the caching operation succeeded or failed
check()

Check the downloaded archive against a checksum digest. No-op if this stage checks code out of a repository.

create()

Ensures the top-level (config:build_stage) directory exists.

destroy()

Removes this stage directory.

expand_archive()

Changes to the stage directory and attempts to expand the downloaded archive. Fails if the stage is not set up or if the archive is not yet downloaded.

expanded

Returns True if source path expanded; else False.

expected_archive_files

Possible archive file paths.

fetch(mirror_only=False)

Downloads an archive or checks out code from a repository.

managed_by_spack = True
restage()

Removes the expanded archive path if it exists, then re-expands the archive.

save_filename
source_path

Returns the well-known source directory path.

stage_locks = {}

Most staging is managed by Spack. DIYStage is one exception.

exception spack.stage.StageError(message, long_message=None)

Bases: spack.error.SpackError

Superclass for all errors encountered during staging.

exception spack.stage.StagePathError(message, long_message=None)

Bases: spack.stage.StageError

Error encountered with stage path.

exception spack.stage.VersionFetchError(message, long_message=None)

Bases: spack.stage.StageError

Raised when we can’t determine a URL to fetch a package.

spack.stage.ensure_access(file)

Ensure we can access a directory and die with an error if we can’t.

spack.stage.get_checksums_for_versions(url_dict, name, first_stage_function=None, keep_stage=False, fetch_options=None, batch=False)

Fetches and checksums archives from URLs.

This function is called by both spack checksum and spack create. The first_stage_function argument allows the caller to inspect the first downloaded archive, e.g., to determine the build system.

Parameters:
  • url_dict (dict) – A dictionary of the form: version -> URL
  • name (str) – The name of the package
  • first_stage_function (callable) – function that takes a Stage and a URL; this is run on the stage of the first URL downloaded
  • keep_stage (bool) – whether to keep staging area when command completes
  • batch (bool) – whether to ask user how many versions to fetch (false) or fetch all versions (true)
  • fetch_options (dict) – Options used for the fetcher (such as timeout or cookies)
Returns:

A multi-line string containing versions and corresponding hashes

Return type:

(str)
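
A sketch of a typical call (this fetches over the network; the URL and version are illustrative):

import spack.stage
from spack.version import Version

url_dict = {Version('1.2.11'): 'https://zlib.net/zlib-1.2.11.tar.gz'}
version_lines = spack.stage.get_checksums_for_versions(
    url_dict, 'zlib', batch=True, keep_stage=False)
print(version_lines)   # multi-line string of versions and their checksums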

spack.stage.get_stage_root()
spack.stage.purge()

Remove all build directories in the top-level stage path.

spack.store module

Components that manage Spack’s installation tree.

An install tree, or “build store” consists of two parts:

  1. A package database that tracks what is installed.
  2. A directory layout that determines how the installations are laid out.

The store contains all the install prefixes for packages installed by Spack. The simplest store could just contain prefixes named by DAG hash, but we use a fancier directory layout to make browsing the store and debugging easier.

The directory layout is currently hard-coded to be a YAMLDirectoryLayout, so called because it stores build metadata within each prefix, in spec.yaml files. In future versions of Spack we may consider allowing install trees to define their own layouts with some per-tree configuration.

class spack.store.Store(root, path_scheme=None, hash_length=None)

Bases: object

A store is a path full of installed Spack packages.

Stores consist of packages installed according to a DirectoryLayout, along with an index, or _database_ of their contents. The directory layout controls what paths look like and how Spack ensures that each unique spec gets its own unique directory (or not, though we don’t recommend that). The database is a single file that caches metadata for the entire Spack installation. It prevents us from having to spider the install tree to figure out what’s there.

Parameters:
  • root (str) – path to the root of the install tree
  • path_scheme (str) – expression according to guidelines in spack.util.path that describes how to construct a path to a package prefix in this store
  • hash_length (int) – length of the hashes used in the directory layout; spec hash suffixes will be truncated to this length
reindex()

Convenience function to reindex the store DB with its own layout.
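
A minimal sketch of constructing a store over an existing install tree (the path is illustrative):

import spack.store

store = spack.store.Store('/path/to/install_tree', hash_length=7)
store.reindex()   # rebuild the database by scanning the directory layout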

spack.store.default_root = '/home/docs/checkouts/readthedocs.org/user_builds/spack/checkouts/v0.15.3/lib/spack/docs/_spack_root/opt/spack'

default installation root, relative to the Spack install path

spack.store.retrieve_upstream_dbs()
spack.store.store = <spack.store.Store object>

Singleton store instance

spack.tengine module

class spack.tengine.Context

Bases: object

Base class for context classes that are used with the template engine.

context_properties = []
to_dict()

Returns a dictionary containing all the context properties.

class spack.tengine.ContextMeta

Bases: type

Meta class for Context. It helps reducing the boilerplate in client code.

classmethod context_property(func)

Decorator that adds a function name to the list of new context properties, and then returns a property.

spack.tengine.context_property = <bound method ContextMeta.context_property of <class 'spack.tengine.ContextMeta'>>

A saner way to use the decorator

spack.tengine.make_environment(dirs=None)

Returns a configured environment for template rendering.
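
A sketch of defining a context and rendering through the template engine; the inline template is illustrative, and make_environment is assumed to return a Jinja2 environment so that from_string is available:

import spack.tengine as tengine

class GreetingContext(tengine.Context):
    @tengine.context_property
    def greeting(self):
        return 'hello from spack'

env = tengine.make_environment()
template = env.from_string('{{ greeting }}, world!')
print(template.render(**GreetingContext().to_dict()))   # -> hello from spack, world!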

spack.tengine.prepend_to_line(text, token)

Prepends a token to each line in text

spack.tengine.quote(text)

Quotes each line in text

spack.url module

This module has methods for parsing names and versions of packages from URLs. The idea is to allow package creators to supply nothing more than the download location of the package, and figure out version and name information from there.

Example: when spack is given the following URL:

It can figure out that the package name is hdf, and that it is at version 4.2.12. This is useful for making the creation of packages simple: a user just supplies a URL and skeleton code is generated automatically.

Spack can also figure out that it can most likely download 4.2.6 at this URL:

This is useful if a user asks for a package at a particular version number; spack doesn’t need anyone to tell it where to get the tarball even though it’s never been told about that version before.

exception spack.url.UndetectableNameError(path)

Bases: spack.url.UrlParseError

Raised when we can’t parse a package name from a string.

exception spack.url.UndetectableVersionError(path)

Bases: spack.url.UrlParseError

Raised when we can’t parse a version from a string.

exception spack.url.UrlParseError(msg, path)

Bases: spack.error.SpackError

Raised when the URL module can’t parse something correctly.

spack.url.color_url(path, **kwargs)

Color the parts of the url according to Spack’s parsing.

Colors are:
Cyan: The version found by parse_version_offset().
Red: The name found by parse_name_offset().
Green: Instances of version string from substitute_version().
Magenta: Instances of the name (protected from substitution).
Parameters:
  • path (str) – The filename or URL for the package
  • errors (bool) – Append parse errors at end of string.
  • subs (bool) – Color substitutions as well as parsed name/version.
spack.url.cumsum(elts, init=0, fn=<function <lambda>>)

Return cumulative sum of result of fn on each element in elts.

spack.url.determine_url_file_extension(path)

This returns the type of archive a URL refers to. This is sometimes confusing because of URLs like:

  1. https://github.com/petdance/ack/tarball/1.93_02

Where the URL doesn’t actually contain the filename. We need to know what type it is so that we can appropriately name files in mirrors.

spack.url.find_all(substring, string)

Returns a list containing the indices of every occurrence of substring in string.

spack.url.find_list_urls(url)

Find good list URLs for the supplied URL.

By default, returns the dirname of the archive path.

Provides special treatment for the following websites, which have a unique list URL different from the dirname of the download URL:

  • GitHub: https://github.com/<repo>/<name>/releases
  • GitLab: https://gitlab.*/<repo>/<name>/tags
  • BitBucket: https://bitbucket.org/<repo>/<name>/downloads/?tab=tags
  • CRAN: https://*.r-project.org/src/contrib/Archive/<name>
Parameters:url (str) – The download URL for the package
Returns:One or more list URLs for the package
Return type:set
spack.url.insensitize(string)

Change upper and lowercase letters to be case insensitive in the provided string. e.g., ‘a’ becomes ‘[Aa]’, ‘B’ becomes ‘[bB]’, etc. Use for building regexes.

spack.url.parse_name(path, ver=None)

Try to determine the name of a package from its filename or URL.

Parameters:
  • path (str) – The filename or URL for the package
  • ver (str) – The version of the package
Returns:

The name of the package

Return type:

str

Raises:

UndetectableNameError – If the URL does not match any regexes

spack.url.parse_name_and_version(path)

Try to determine the name of a package and extract its version from its filename or URL.

Parameters:

path (str) – The filename or URL for the package

Returns:

A tuple containing the name of the package and the version of the package

Return type:

tuple of (str, Version)

Raises:

  • UndetectableVersionError – If the URL does not match any version regexes
  • UndetectableNameError – If the URL does not match any name regexes
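
For example, using the hdf URL shown later for substitute_version() (a sketch):

import spack.url

url = 'https://www.hdfgroup.org/ftp/HDF/releases/HDF4.2.12/src/hdf-4.2.12.tar.gz'
name, version = spack.url.parse_name_and_version(url)
# name == 'hdf', version == Version('4.2.12')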
spack.url.parse_name_offset(path, v=None)

Try to determine the name of a package from its filename or URL.

Parameters:
  • path (str) – The filename or URL for the package
  • v (str) – The version of the package
Returns:

A tuple containing:

the name of the package, the first index of the name, the length of the name, the index of the matching regex, and the matching regex

Return type:

tuple of (str, int, int, int, str)

Raises:

UndetectableNameError – If the URL does not match any regexes

spack.url.parse_version(path)

Try to extract a version string from a filename or URL.

Parameters:path (str) – The filename or URL for the package
Returns:The version of the package
Return type:spack.version.Version
Raises:UndetectableVersionError – If the URL does not match any regexes
spack.url.parse_version_offset(path)

Try to extract a version string from a filename or URL.

Parameters:path (str) – The filename or URL for the package
Returns:
A tuple containing:
the version of the package, the first index of the version, the length of the version string, the index of the matching regex, and the matching regex
Return type:tuple of (Version, int, int, int, str)
Raises:UndetectableVersionError – If the URL does not match any regexes
spack.url.split_url_extension(path)

Some URLs have a query string, e.g.:

  1. https://github.com/losalamos/CLAMR/blob/packages/PowerParser_v2.0.7.tgz?raw=true
  2. http://www.apache.org/dyn/closer.cgi?path=/cassandra/1.2.0/apache-cassandra-1.2.0-rc2-bin.tar.gz
  3. https://gitlab.kitware.com/vtk/vtk/repository/archive.tar.bz2?ref=v7.0.0

In (1), the query string needs to be stripped to get at the extension, but in (2) & (3), the filename is IN a single final query argument.

This strips the URL into three pieces: prefix, ext, and suffix. The suffix contains anything that was stripped off the URL to get at the file extension. In (1), it will be '?raw=true', but in (2), it will be empty. In (3) the suffix is a parameter that follows after the file extension, e.g.:

  1. ('https://github.com/losalamos/CLAMR/blob/packages/PowerParser_v2.0.7', '.tgz', '?raw=true')
  2. ('http://www.apache.org/dyn/closer.cgi?path=/cassandra/1.2.0/apache-cassandra-1.2.0-rc2-bin', '.tar.gz', None)
  3. ('https://gitlab.kitware.com/vtk/vtk/repository/archive', '.tar.bz2', '?ref=v7.0.0')
spack.url.strip_name_suffixes(path, version)

Most tarballs contain a package name followed by a version number. However, some also contain extraneous information in-between the name and version:

  • rgb-1.0.6
  • converge_install_2.3.16
  • jpegsrc.v9b

These strings are not part of the package name and should be ignored. This function strips the version number and any extraneous suffixes off and returns the remaining string. The goal is that the name is always the last thing in path:

  • rgb
  • converge
  • jpeg
Parameters:
  • path (str) – The filename or URL for the package
  • version (str) – The version detected for this URL
Returns:

The path with any extraneous suffixes removed

Return type:

str

spack.url.strip_query_and_fragment(path)
spack.url.strip_version_suffixes(path)

Some tarballs contain extraneous information after the version:

  • bowtie2-2.2.5-source
  • libevent-2.0.21-stable
  • cuda_8.0.44_linux.run

These strings are not part of the version number and should be ignored. This function strips those suffixes off and returns the remaining string. The goal is that the version is always the last thing in path:

  • bowtie2-2.2.5
  • libevent-2.0.21
  • cuda_8.0.44
Parameters:path (str) – The filename or URL for the package
Returns:The path with any extraneous suffixes removed
Return type:str
spack.url.substitute_version(path, new_version)

Given a URL or archive name, find the version in the path and substitute the new version for it. Replace all occurrences of the version if they don’t overlap with the package name.

Simple example:

>>> substitute_version('http://www.mr511.de/software/libelf-0.8.13.tar.gz', '2.9.3')
'http://www.mr511.de/software/libelf-2.9.3.tar.gz'

Complex example:

>>> substitute_version('https://www.hdfgroup.org/ftp/HDF/releases/HDF4.2.12/src/hdf-4.2.12.tar.gz', '2.3')
'https://www.hdfgroup.org/ftp/HDF/releases/HDF2.3/src/hdf-2.3.tar.gz'
spack.url.substitution_offsets(path)

This returns offsets for substituting versions and names in the provided path. It is a helper for substitute_version().

spack.url.wildcard_version(path)

Find the version in the supplied path, and return a regular expression that will match this path with any version in its place.

spack.user_environment module

spack.user_environment.environment_modifications_for_spec(spec, view=None)

List of environment (shell) modifications to be processed for spec.

This list is specific to the location of the spec or its projection in the view.

spack.user_environment.prefix_inspections(platform)

Get list of prefix inspections for platform

Parameters:platform (string) – the name of the platform to consider. The platform determines what environment variables Spack will use for some inspections.
Returns:
A dictionary mapping subdirectory names to lists of environment
variables to modify with that directory if it exists.
spack.user_environment.spack_loaded_hashes_var = 'SPACK_LOADED_HASHES'

Environment variable name Spack uses to track individually loaded packages

spack.user_environment.unconditional_environment_modifications(view)

List of environment (shell) modifications to be processed for view.

This list does not depend on the specs in this environment

spack.variant module

The variant module contains data structures that are needed to manage variants both in packages and in specs.

class spack.variant.AbstractVariant(name, value)

Bases: object

A variant that has not yet decided who it wants to be. It behaves like a multi valued variant which could do things.

This kind of variant is generated during parsing of expressions like foo=bar and differs from multi valued variants because it will satisfy any other variant with the same name. This is because it could do it if it grows up to be a multi valued variant with the right set of values.

compatible(other)

Returns True if self and other are compatible, False otherwise.

As there is no semantic check, two VariantSpec instances are compatible if either they contain the same value or they are both multi-valued.

Parameters:other – instance against which we test compatibility
Returns:True or False
Return type:bool
constrain(other)

Modify self to match all the constraints for other if both instances are multi-valued. Returns True if self changed, False otherwise.

Parameters:other – instance against which we constrain self
Returns:True or False
Return type:bool
copy()

Returns an instance of a variant equivalent to self

Returns:a copy of self
Return type:any variant type
>>> a = MultiValuedVariant('foo', True)
>>> b = a.copy()
>>> assert a == b
>>> assert a is not b
static from_node_dict(name, value)

Reconstruct a variant from a node dict.

satisfies(other)

Returns true if other.name == self.name, because any value that other holds and is not in self yet could be added.

Parameters:other – constraint to be met for the method to return True
Returns:True or False
Return type:bool
value

Returns a tuple of strings containing the values stored in the variant.

Returns:values stored in the variant
Return type:tuple of str
yaml_entry()

Returns a key, value tuple suitable to be an entry in a yaml dict.

Returns:(name, value_representation)
Return type:tuple
class spack.variant.BoolValuedVariant(name, value)

Bases: spack.variant.SingleValuedVariant

A variant that can hold either True or False.

class spack.variant.DisjointSetsOfValues(*sets)

Bases: collections.abc.Sequence

Allows combinations from one of many mutually exclusive sets.

The value ('none',) is reserved to denote the empty set and therefore no other set can contain the item 'none'.

Parameters:*sets (list of tuples) – mutually exclusive sets of values
allow_empty_set()

Adds the empty set to the current list of disjoint sets.

feature_values = None

Attribute used to track values which correspond to features which can be enabled or disabled as understood by the package’s build system.

prohibit_empty_set()

Removes the empty set from the current list of disjoint sets.

validator
with_default(default)

Sets the default value and returns self.

with_error(error_fmt)

Sets the error message format and returns self.

with_non_feature_values(*values)

Marks a few values as not being tied to a feature.

exception spack.variant.DuplicateVariantError(message, long_message=None)

Bases: spack.error.SpecError

Raised when the same variant occurs in a spec twice.

exception spack.variant.InconsistentValidationError(vspec, variant)

Bases: spack.error.SpecError

Raised if the wrong validator is used to validate a variant.

exception spack.variant.InvalidVariantValueError(variant, invalid_values, pkg)

Bases: spack.error.SpecError

Raised when a valid variant has at least one invalid value.

class spack.variant.MultiValuedVariant(name, value)

Bases: spack.variant.AbstractVariant

A variant that can hold multiple values at once.

satisfies(other)

Returns true if other.name == self.name and other.value is a strict subset of self. Does not try to validate.

Parameters:other – constraint to be met for the method to return True
Returns:True or False
Return type:bool
exception spack.variant.MultipleValuesInExclusiveVariantError(variant, pkg)

Bases: spack.error.SpecError, ValueError

Raised when multiple values are present in a variant that wants only one.

class spack.variant.SingleValuedVariant(name, value)

Bases: spack.variant.MultiValuedVariant

A variant that can hold multiple values, but one at a time.

compatible(other)

Returns True if self and other are compatible, False otherwise.

As there is no semantic check, two VariantSpec instances are compatible if either they contain the same value or they are both multi-valued.

Parameters:other – instance against which we test compatibility
Returns:True or False
Return type:bool
constrain(other)

Modify self to match all the constraints for other if both instances are multi-valued. Returns True if self changed, False otherwise.

Parameters:other – instance against which we constrain self
Returns:True or False
Return type:bool
satisfies(other)

Returns true if other.name == self.name and other.value is a strict subset of self. Does not try to validate.

Parameters:other – constraint to be met for the method to return True
Returns:True or False
Return type:bool
yaml_entry()

Returns a key, value tuple suitable to be an entry in a yaml dict.

Returns:(name, value_representation)
Return type:tuple
exception spack.variant.UnknownVariantError(spec, variants)

Bases: spack.error.SpecError

Raised when an unknown variant occurs in a spec.

exception spack.variant.UnsatisfiableVariantSpecError(provided, required)

Bases: spack.error.UnsatisfiableSpecError

Raised when a spec variant conflicts with package constraints.

class spack.variant.Variant(name, default, description, values=(True, False), multi=False, validator=None)

Bases: object

Represents a variant in a package, as declared in the variant directive.

allowed_values

Returns a string representation of the allowed values for printing purposes

Returns:representation of the allowed values
Return type:str
make_default()

Factory that creates a variant holding the default value.

Returns:instance of the proper variant
Return type:MultiValuedVariant or SingleValuedVariant or BoolValuedVariant
make_variant(value)

Factory that creates a variant holding the value passed as a parameter.

Parameters:value – value that will be held by the variant
Returns:instance of the proper variant
Return type:MultiValuedVariant or SingleValuedVariant or BoolValuedVariant
validate_or_raise(vspec, pkg=None)

Validate a variant spec against this package variant. Raises an exception if any error is found.

Parameters:
  • vspec (VariantSpec) – instance to be validated
  • pkg (Package) – the package that required the validation, if available
Raises:
variant_cls

Proper variant class to be used for this configuration.

class spack.variant.VariantMap(spec)

Bases: llnl.util.lang.HashableMap

Map containing variant instances. New values can be added only if the key is not already present.

concrete

Returns True if the spec is concrete in terms of variants.

Returns:True or False
Return type:bool
constrain(other)

Add all variants in other that aren’t in self to self. Also constrain all multi-valued variants that are already present. Return True if self changed, False otherwise

Parameters:other (VariantMap) – instance against which we constrain self
Returns:True or False
Return type:bool
copy()

Return an instance of VariantMap equivalent to self.

Returns:a copy of self
Return type:VariantMap
satisfies(other, strict=False)

Returns True if this VariantMap is more constrained than other, False otherwise.

Parameters:
  • other (VariantMap) – VariantMap instance to satisfy
  • strict (bool) – if True return False if a key is in other and not in self, otherwise discard that key and proceed with evaluation
Returns:

True or False

Return type:

bool

substitute(vspec)

Substitutes the entry under vspec.name with vspec.

Parameters:vspec – variant spec to be substituted
spack.variant.any_combination_of(*values)

Multi-valued variant that allows any combination of the specified values, and also allows the user to specify ‘none’ (as a string) to choose none of them.

It is up to the package implementation to handle the value ‘none’ specially, if at all.

Parameters:*values – allowed variant values
Returns:a properly initialized instance of DisjointSetsOfValues
spack.variant.auto_or_any_combination_of(*values)

Multi-valued variant that allows any combination of a set of values (but not the empty set) or ‘auto’.

Parameters:*values – allowed variant values
Returns:a properly initialized instance of DisjointSetsOfValues
spack.variant.disjoint_sets(*sets)

Multi-valued variant that allows any combination picking from one of multiple disjoint sets of values, and also allows the user to specify ‘none’ (as a string) to choose none of them.

It is up to the package implementation to handle the value ‘none’ specially, if at all.

Parameters:*sets – mutually exclusive sets of values
Returns:a properly initialized instance of DisjointSetsOfValues
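A sketch of how these helpers typically appear when defining package variants (the variant names and values are illustrative):

from spack.variant import any_combination_of, disjoint_sets

# Any subset of the listed values, or the special value 'none':
languages = any_combination_of('c', 'c++', 'fortran')

# Pick values from exactly one of several mutually exclusive sets:
transport = disjoint_sets(('auto',), ('mpi',), ('shmem',)).with_default('auto')

# In a package.py these objects are normally passed as the values
# argument of the variant() directive.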
spack.variant.implicit_variant_conversion(method)

Converts other to type(self) and calls method(self, other)

Parameters:method – any predicate method that takes another variant as an argument

Returns: decorated method

spack.variant.substitute_abstract_variants(spec)

Uses the information in spec.package to turn any variant that needs it into a SingleValuedVariant.

This method is best effort. All variants that can be substituted will be substituted before any error is raised.

Parameters:spec – spec on which to operate the substitution

spack.verify module

class spack.verify.VerificationResults

Bases: object

add_error(path, field)
has_errors()
json_string()
spack.verify.check_entry(path, data)
spack.verify.check_file_manifest(file)
spack.verify.check_spec_manifest(spec)
spack.verify.compute_hash(path)
spack.verify.create_manifest_entry(path)
spack.verify.write_manifest(spec)

spack.version module

This module implements Version and version-ish objects. These are:

Version
A single version of a package.
VersionRange
A range of versions of a package.
VersionList
A list of Versions and VersionRanges.

All of these types support the following operations, which can be called on any of the types:

__eq__, __ne__, __lt__, __gt__, __ge__, __le__, __hash__
__contains__
satisfies
overlaps
union
intersection
concrete
class spack.version.Version(string)

Bases: object

Class to represent versions

concrete
dashed

The dashed representation of the version.

Example:
>>> version = Version('1.2.3b')
>>> version.dashed
Version('1-2-3b')

Returns:The version with separator characters replaced by dashes
Return type:Version
dotted

The dotted representation of the version.

Example:
>>> version = Version('1-2-3b')
>>> version.dotted
Version('1.2.3b')

Returns:The version with separator characters replaced by dots
Return type:Version
highest()
intersection(other)
is_predecessor(other)

True if the other version is the immediate predecessor of this one. That is, NO versions v exist such that: (self < v < other and v not in self).

is_successor(other)
isdevelop()

Triggers on the special case of the @develop-like version.

joined

The joined representation of the version.

Example:
>>> version = Version('1.2.3b')
>>> version.joined
Version('123b')

Returns:The version with separator characters removed
Return type:Version
lowest()
overlaps(other)
satisfies(other)

A Version ‘satisfies’ another if it is at least as specific and has a common prefix. e.g., we want gcc@4.7.3 to satisfy a request for gcc@4.7 so that when a user asks to build with gcc@4.7, we can find a suitable compiler.

underscored

The underscored representation of the version.

Example:
>>> version = Version('1.2.3b')
>>> version.underscored
Version('1_2_3b')

Returns:
The version with separator characters replaced by
underscores
Return type:Version
union(other)
up_to(index)

The version up to the specified component.

Examples:
>>> version = Version('1.23-4b')
>>> version.up_to(1)
Version('1')
>>> version.up_to(2)
Version('1.23')
>>> version.up_to(3)
Version('1.23-4')
>>> version.up_to(4)
Version('1.23-4b')
>>> version.up_to(-1)
Version('1.23-4')
>>> version.up_to(-2)
Version('1.23')
>>> version.up_to(-3)
Version('1')

Returns:The first index components of the version
Return type:Version
class spack.version.VersionRange(start, end)

Bases: object

concrete
highest()
intersection(other)
lowest()
overlaps(other)
satisfies(other)

A VersionRange satisfies another if some version in this range would satisfy some version in the other range. To do this it must either:

  1. Overlap with the other range, or
  2. Have a start that satisfies the end of the other range.

This is essentially the same as overlaps(), but overlaps assumes that its arguments are specific. That is, 4.7 is interpreted as 4.7.0.0.0.0… . This function assumes that 4.7 would be satisfied by 4.7.3.5, etc.

Rationale:

If a user asks for gcc@4.5:4.7, and a package is only compatible with gcc@4.7.3:4.8, then that package should be able to build under the constraints. Just using overlaps() would not work here.

Note that we don’t need to check whether the end of this range would satisfy the start of the other range, because overlaps() already covers that case.

Note further that overlaps() is a symmetric operation, while satisfies() is not.
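
A sketch of the rationale above, using ver() (documented later in this module) to build the ranges:

from spack.version import ver

ver('4.5:4.7').satisfies(ver('4.7.3:4.8'))   # True: some 4.7.x version meets both constraints
ver('4.5:4.7').overlaps(ver('4.7.3:4.8'))    # False: overlaps() treats 4.7 as exactly 4.7.0.0...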

union(other)
class spack.version.VersionList(vlist=None)

Bases: object

Sorted, non-redundant list of Versions and VersionRanges.

add(version)
concrete
copy()
static from_dict(dictionary)

Parse dict from to_dict.

highest()

Get the highest version in the list.

highest_numeric()

Get the highest numeric version in the list.

intersect(other)

Intersect this spec’s list with other.

Return True if the spec changed as a result; False otherwise

intersection(other)
lowest()

Get the lowest version in the list.

overlaps(other)
preferred()

Get the preferred (latest) version in the list.

satisfies(other, strict=False)

A VersionList satisfies another if some version in the list would satisfy some version in the other list. This uses essentially the same algorithm as overlaps() does for VersionList, but it calls satisfies() on member Versions and VersionRanges.

If strict is specified, this version list must lie entirely within the other in order to satisfy it.

to_dict()

Generate human-readable dict for YAML.

union(other)
update(other)
spack.version.ver(obj)

Parses a Version, VersionRange, or VersionList from a string or list of strings.
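
For example (a sketch; the exact repr of the returned objects may differ):

from spack.version import ver

ver('1.2.3')              # a single Version
ver('1.2:1.4')            # a VersionRange
ver(['1.0', '1.2:1.4'])   # a VersionList built from a list of strings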

Module contents

spack.spack_version_info = (0, 15, 3)

major, minor, patch version for Spack, in a tuple

spack.spack_version = '0.15.3'

String containing Spack version joined with .’s