spack package

Subpackages

Submodules

spack.abi module

class spack.abi.ABI[source]

Bases: object

This class provides methods to test ABI compatibility between specs. The current implementation is rather rough and could be improved.

architecture_compatible(target, constraint)[source]

Return true if the architecture of the target spec is ABI compatible with the architecture of the constraint spec. If either the target or constraint spec has no architecture, the target is considered architecture ABI compatible with the constraint.

compatible(target, constraint, **kwargs)[source]

Return true if the target spec is ABI compatible with the constraint spec.

compiler_compatible(parent, child, **kwargs)[source]

Return true if compilers for parent and child are ABI compatible.
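The predicates above reduce to simple checks on spec fields. A minimal sketch of the logic, using a stand-in spec type rather than Spack's actual spack.spec.Spec class (field names and the equality-based compiler check are illustrative assumptions):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class FakeSpec:
    # stand-in for spack.spec.Spec; only the fields the checks need
    architecture: Optional[str] = None
    compiler: Optional[str] = None


def architecture_compatible(target, constraint):
    # a missing architecture on either side counts as compatible
    return (not target.architecture
            or not constraint.architecture
            or target.architecture == constraint.architecture)


def compiler_compatible(parent, child):
    # sketch: treat identical (or missing) compiler specs as ABI compatible
    return (not parent.compiler
            or not child.compiler
            or parent.compiler == child.compiler)


def compatible(target, constraint):
    # a spec is ABI compatible when both checks pass
    return (architecture_compatible(target, constraint)
            and compiler_compatible(target, constraint))
```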

spack.audit module

Classes and functions to register audit checks for various parts of Spack and run them on-demand.

To register a new class of sanity checks (e.g. sanity checks for compilers.yaml), the first action required is to create a new AuditClass object:

audit_cfgcmp = AuditClass(
    group='configs',
    tag='CFG-COMPILER',
    description='Sanity checks on compilers.yaml',
    kwargs=()
)

This object is then used as a decorator to register functions, each of which performs a single check:

@audit_cfgcmp
def _search_duplicate_compilers(error_cls):
    pass

These functions must accept as arguments the keywords declared when creating the decorator object, plus a final error_cls argument that acts as a factory for Error objects. Each function should return a (possibly empty) list of errors.

Calls to each of these functions are triggered by the run method of the decorator object, which forwards the keyword arguments it receives.
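The registration and run flow described above can be sketched with a simplified registry (names and structure here are illustrative, not Spack's exact implementation):

```python
class Error:
    """Stand-in for spack.audit.Error: summary plus details."""
    def __init__(self, summary, details):
        self.summary = summary
        self.details = details


class MiniAuditClass:
    """Simplified stand-in for spack.audit.AuditClass."""
    def __init__(self, tag, description, kwargs=()):
        self.tag = tag
        self.description = description
        self.kwargs = kwargs
        self.callbacks = []

    def __call__(self, func):
        # used as a decorator: register the check and return it unchanged
        self.callbacks.append(func)
        return func

    def run(self, **kwargs):
        # invoke every registered check, forwarding kwargs plus error_cls
        errors = []
        for check in self.callbacks:
            errors.extend(check(error_cls=Error, **kwargs))
        return errors


audit_cfgcmp = MiniAuditClass(tag="CFG-COMPILER",
                              description="Sanity checks on compilers.yaml")


@audit_cfgcmp
def _search_duplicate_compilers(error_cls):
    # a real check would inspect configuration; here we report nothing
    return []
```

Calling `audit_cfgcmp.run()` invokes every registered check and aggregates the returned error lists.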

class spack.audit.AuditClass(group, tag, description, kwargs)[source]

Bases: Sequence

run(**kwargs)[source]
spack.audit.CALLBACKS = {'CFG-COMPILER': <spack.audit.AuditClass object>, 'CFG-PACKAGES': <spack.audit.AuditClass object>, 'GENERIC': <spack.audit.AuditClass object>, 'PKG-ATTRIBUTES': <spack.audit.AuditClass object>, 'PKG-DIRECTIVES': <spack.audit.AuditClass object>, 'PKG-HTTPS-DIRECTIVES': <spack.audit.AuditClass object>, 'PKG-PROPERTIES': <spack.audit.AuditClass object>}

Map an audit tag to a list of callables implementing checks

class spack.audit.Error(summary, details)[source]

Bases: object

Information on an error reported in a test.

spack.audit.GROUPS = {'configs': ['CFG-COMPILER', 'CFG-PACKAGES'], 'generic': ['GENERIC'], 'packages': ['PKG-DIRECTIVES', 'PKG-ATTRIBUTES', 'PKG-PROPERTIES'], 'packages-https': ['PKG-HTTPS-DIRECTIVES']}

Map a group of checks to the list of related audit tags

spack.audit.config_compiler = <spack.audit.AuditClass object>

Sanity checks on compilers.yaml

spack.audit.config_packages = <spack.audit.AuditClass object>

Sanity checks on packages.yaml

spack.audit.generic = <spack.audit.AuditClass object>

Generic checks relying on global state

spack.audit.package_directives = <spack.audit.AuditClass object>

Sanity checks on package directives

spack.audit.run_check(tag, **kwargs)[source]

Run the checks associated with a single tag.

Parameters
  • tag (str) – tag of the check

  • **kwargs – keyword arguments forwarded to the checks

Returns

Errors that occurred during the checks

spack.audit.run_group(group, **kwargs)[source]

Run the checks that are part of the group passed as argument.

Parameters
  • group (str) – group of checks to be run

  • **kwargs – keyword arguments forwarded to the checks

Returns

List of (tag, errors) that failed.

spack.binary_distribution module

class spack.binary_distribution.BinaryCacheIndex(cache_root)[source]

Bases: object

The BinaryCacheIndex tracks what specs are available on (usually remote) binary caches.

This index is “best effort”, in the sense that whenever we don’t find what we’re looking for here, we will attempt to fetch it directly from configured mirrors anyway. Thus, it has the potential to speed things up, but cache misses shouldn’t break any spack functionality.

At the moment, everything in this class is initialized as lazily as possible, so that it avoids slowing anything in spack down until absolutely necessary.

TODO: What’s the cost if, e.g., we realize in the middle of a spack install that the cache is out of date, and we fetch directly? Does it mean we should have paid the price to update the cache earlier?

clear()[source]

For testing purposes we need to be able to empty the cache and clear associated data structures.

find_built_spec(spec, mirrors_to_check=None)[source]

Look in our cache for the built spec corresponding to spec.

If the spec can be found among the configured binary mirrors, a list is returned that contains the concrete spec and the mirror url of each mirror where it can be found. Otherwise, None is returned.

This method does not trigger reading anything from remote mirrors, but rather just checks if the concrete spec is found within the cache.

The cache can be updated by calling update() on the cache.

Parameters
  • spec (spack.spec.Spec) – Concrete spec to find

  • mirrors_to_check – Optional mapping containing mirrors to check. If None, just assumes all configured mirrors.

Returns

A list of objects containing the found specs and the mirror URL where each can be found, e.g.:

[
    {
        "spec": <concrete-spec>,
        "mirror_url": <mirror-root-url>
    }
]

find_by_hash(find_hash, mirrors_to_check=None)[source]

Same as find_built_spec but uses the hash of a spec.

Parameters
  • find_hash (str) – hash of the spec to search

  • mirrors_to_check – Optional mapping containing mirrors to check. If None, just assumes all configured mirrors.

get_all_built_specs()[source]
regenerate_spec_cache(clear_existing=False)[source]

Populate the local cache of concrete specs (_mirrors_for_spec) from the locally cached buildcache index files. This is essentially a no-op if it has already been done, as we keep track of the index hashes for which we have already associated the built specs.

update(with_cooldown=False)[source]

Make sure local cache of buildcache index files is up to date. If the same mirrors are configured as the last time this was called and none of the remote buildcache indices have changed, calling this method will only result in fetching the index hash from each mirror to confirm it is the same as what is stored locally. Otherwise, the buildcache index.json and index.json.hash files are retrieved from each configured mirror and stored locally (both in memory and on disk under _index_cache_root).

update_spec(spec, found_list)[source]

Take a list of {'mirror_url': m, 'spec': s} objects and update the local built_spec_cache.

class spack.binary_distribution.BinaryCacheQuery(all_architectures)[source]

Bases: object

Callable object to query if a spec is in a binary cache

class spack.binary_distribution.BuildManifestVisitor[source]

Bases: BaseDirectoryVisitor

Visitor that collects a list of files and symlinks that can be checked for need of relocation. It knows how to dedupe hardlinks and deal with symlinks to files and directories.

before_visit_dir(root, rel_path, depth)[source]

Return True from this function to recurse into the directory at os.path.join(root, rel_path). Return False in order not to recurse further.

Parameters
  • root (str) – root directory

  • rel_path (str) – relative path to current directory from root

  • depth (int) – depth of current directory from the root directory

Returns

True when the directory should be recursed into. False when not

Return type

bool

before_visit_symlinked_dir(root, rel_path, depth)[source]

Return True to recurse into the symlinked directory and False in order not to. Note: rel_path is the path to the symlink itself. Following symlinked directories blindly can cause infinite recursion due to cycles.

Parameters
  • root (str) – root directory

  • rel_path (str) – relative path to current symlink from root

  • depth (int) – depth of current symlink from the root directory

Returns

True when the directory should be recursed into. False when not

Return type

bool

seen_before(root, rel_path)[source]
visit_file(root, rel_path, depth)[source]

Handle the non-symlink file at os.path.join(root, rel_path)

Parameters
  • root (str) – root directory

  • rel_path (str) – relative path to current file from root

  • depth (int) – depth of current file from the root directory

visit_symlinked_file(root, rel_path, depth)[source]

Handle the symlink to a file at os.path.join(root, rel_path). Note: rel_path is the location of the symlink, not to what it is pointing to. The symlink may be dangling.

Parameters
  • root (str) – root directory

  • rel_path (str) – relative path to current symlink from root

  • depth (int) – depth of current symlink from the root directory
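The hardlink de-duplication mentioned above is typically done by tracking (st_dev, st_ino) pairs. A self-contained sketch of that idea (not the actual BuildManifestVisitor, which plugs into Spack's BaseDirectoryVisitor protocol):

```python
import os


def collect_regular_files(root):
    """Walk root, collecting relative paths of regular files while
    skipping hardlink duplicates (same device/inode seen before) and
    recording symlinks to files separately, without following them."""
    files, symlinks, seen = [], [], set()
    for dirpath, dirnames, filenames in os.walk(root, followlinks=False):
        for name in filenames:
            full = os.path.join(dirpath, name)
            rel = os.path.relpath(full, root)
            if os.path.islink(full):
                symlinks.append(rel)  # may be dangling; never followed
                continue
            st = os.lstat(full)
            key = (st.st_dev, st.st_ino)
            if key in seen:
                continue  # hardlink to an already-visited file
            seen.add(key)
            files.append(rel)
    return files, symlinks
```

Deduping by inode is what makes in-place, parallel relocation safe: each underlying file is rewritten exactly once.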

exception spack.binary_distribution.FetchCacheError(errors)[source]

Bases: Exception

Error thrown when fetching the cache failed, usually a composite error list.

exception spack.binary_distribution.ListMirrorSpecsError(message, long_message=None)[source]

Bases: SpackError

Raised when unable to retrieve list of specs from the mirror

exception spack.binary_distribution.NewLayoutException(msg)[source]

Bases: SpackError

Raised if directory layout is different from buildcache.

exception spack.binary_distribution.NoChecksumException(message, long_message=None)[source]

Bases: SpackError

Raised if file fails checksum verification.

exception spack.binary_distribution.NoGpgException(msg)[source]

Bases: SpackError

Raised when gpg2 is not in PATH

exception spack.binary_distribution.NoKeyException(msg)[source]

Bases: SpackError

Raised when gpg has no default key added.

exception spack.binary_distribution.NoOverwriteException(file_path)[source]

Bases: SpackError

Raised when a file exists and must be overwritten.

exception spack.binary_distribution.NoVerifyException(message, long_message=None)[source]

Bases: SpackError

Raised if file fails signature verification.

exception spack.binary_distribution.PickKeyException(keys)[source]

Bases: SpackError

Raised when multiple keys can be used to sign.

exception spack.binary_distribution.UnsignedPackageException(message, long_message=None)[source]

Bases: SpackError

Raised if installation of unsigned package is attempted without the use of --no-check-signature.

spack.binary_distribution.binary_index = <spack.binary_distribution.BinaryCacheIndex object>

Singleton binary_index instance

spack.binary_distribution.binary_index_location()[source]

Set up a BinaryCacheIndex for remote buildcache dbs in the user’s homedir.

spack.binary_distribution.build_cache_keys_relative_path()[source]
spack.binary_distribution.build_cache_prefix(prefix)[source]
spack.binary_distribution.build_cache_relative_path()[source]
spack.binary_distribution.buildinfo_file_name(prefix)[source]

Filename of the binary package meta-data file

spack.binary_distribution.check_package_relocatable(workdir, spec, allow_root)[source]

Check if package binaries are relocatable. Change links to placeholder links.

spack.binary_distribution.check_specs_against_mirrors(mirrors, specs, output_file=None)[source]

Check all the given specs against buildcaches on the given mirrors and determine if any of the specs need to be rebuilt. Specs need to be rebuilt when their hash doesn’t exist in the mirror.

Parameters
  • mirrors (dict) – Mirrors to check against

  • specs (Iterable) – Specs to check against mirrors

  • output_file (str) – Path to output file to be written. If provided, mirrors with missing or out-of-date specs will be formatted as a JSON object and written to this file.

Returns: 1 if any spec was out-of-date on any mirror, 0 otherwise.

spack.binary_distribution.checksum_tarball(file)[source]
spack.binary_distribution.clear_spec_cache()[source]
spack.binary_distribution.compute_hash(data)[source]

Updates a buildinfo dict for old archives that did not dedupe hardlinks. De-duping hardlinks is necessary when relocating files in parallel and in-place. This means we must preserve inodes when relocating.

spack.binary_distribution.download_buildcache_entry(file_descriptions, mirror_url=None)[source]
spack.binary_distribution.download_single_spec(concrete_spec, destination, mirror_url=None)[source]

Download the buildcache files for a single concrete spec.

Parameters
  • concrete_spec – concrete spec to be downloaded

  • destination (str) – path where to put the downloaded buildcache

  • mirror_url (str) – url of the mirror from which to download

spack.binary_distribution.download_tarball(spec, unsigned=False, mirrors_for_spec=None)[source]

Download binary tarball for given package into stage area, returning path to downloaded tarball if successful, None otherwise.

Parameters
  • spec (spack.spec.Spec) – Concrete spec

  • unsigned (bool) – Whether or not to require signed binaries

  • mirrors_for_spec (list) – Optional list of concrete specs and mirrors obtained by calling binary_distribution.get_mirrors_for_spec(). These will be checked in order first before looking in other configured mirrors.

Returns

None if the tarball could not be downloaded (maybe also verified, depending on whether new-style signed binary packages were found). Otherwise, return an object indicating the path to the downloaded tarball, the path to the downloaded specfile (in the case of new-style buildcache), and whether or not the tarball is already verified.

{
    "tarball_path": "path-to-locally-saved-tarfile",
    "specfile_path": "none-or-path-to-locally-saved-specfile",
    "signature_verified": "true-if-binary-pkg-was-already-verified"
}
spack.binary_distribution.extract_tarball(spec, download_result, allow_root=False, unsigned=False, force=False)[source]

Extract the binary tarball for the given package into the install area.

spack.binary_distribution.file_matches(path, regex)[source]
spack.binary_distribution.generate_key_index(key_prefix, tmpdir=None)[source]

Create the key index page.

Creates (or replaces) the “index.json” page at the location given in key_prefix. This page contains an entry for each key (.pub) under key_prefix.

spack.binary_distribution.generate_package_index(cache_prefix, concurrency=32)[source]

Create or replace the build cache index on the given mirror. The buildcache index contains an entry for each binary package under the cache_prefix.

Parameters
  • cache_prefix (str) – Base url of binary mirror.

  • concurrency (int) – The desired threading concurrency to use when fetching the spec files from the mirror.

Returns

None

spack.binary_distribution.get_buildfile_manifest(spec)[source]

Return a data structure with information about a build, including text_to_relocate, binary_to_relocate, binary_to_relocate_fullpath, link_to_relocate, and other (files that fit none of the previous categories and should not be relocated). Docs (man) and metadata (.spack) are excluded. This can be used to find a particular kind of file in Spack, or to generate the build metadata.

spack.binary_distribution.get_keys(install=False, trust=False, force=False, mirrors=None)[source]

Get PGP public keys (files with suffix .pub) available on the mirror.

spack.binary_distribution.get_mirrors_for_spec(spec=None, mirrors_to_check=None, index_only=False)[source]

Check whether a concrete spec exists on the configured mirrors and return a list indicating the mirrors on which it can be found.

Parameters
  • spec (spack.spec.Spec) – The spec to look for in binary mirrors

  • mirrors_to_check (dict) – Optionally override the configured mirrors with the mirrors in this dictionary.

  • index_only (bool) – Do not attempt direct fetching of spec.json files from remote mirrors, only consider the indices.

Returns

A list of objects, each containing a mirror_url and a spec key, indicating all mirrors where the spec can be found.

spack.binary_distribution.install_root_node(spec, allow_root, unsigned=False, force=False, sha256=None)[source]

Install the root node of a concrete spec from a buildcache.

Checking the sha256 sum of a node before installation is usually needed only for software installed during Spack’s bootstrapping (since we might not have a proper signature verification mechanism available).

Parameters
  • spec – spec to be installed (note that only the root node will be installed)

  • allow_root (bool) – allows the root directory to be present in binaries (may affect relocation)

  • unsigned (bool) – if True allows installing unsigned binaries

  • force (bool) – force installation if the spec is already present in the local store

  • sha256 (str) – optional sha256 of the binary package, to be checked before installation

spack.binary_distribution.install_single_spec(spec, allow_root=False, unsigned=False, force=False)[source]

Install a single concrete spec from a buildcache.

Parameters
  • spec (spack.spec.Spec) – spec to be installed

  • allow_root (bool) – allows the root directory to be present in binaries (may affect relocation)

  • unsigned (bool) – if True allows installing unsigned binaries

  • force (bool) – force installation if the spec is already present in the local store

spack.binary_distribution.make_package_relative(workdir, spec, allow_root)[source]

Change paths in binaries to relative paths. Change absolute symlinks to relative symlinks.

spack.binary_distribution.needs_rebuild(spec, mirror_url)[source]
spack.binary_distribution.nodes_to_be_packaged(specs, include_root=True, include_dependencies=True)[source]

Return the list of nodes to be packaged, given a list of specs.

Parameters
  • specs (List[spack.spec.Spec]) – list of root specs to be processed

  • include_root (bool) – include the root of each spec in the nodes

  • include_dependencies (bool) – include the dependencies of each spec in the nodes

spack.binary_distribution.push(specs, push_url, specs_kwargs=None, **kwargs)[source]

Create a binary package for each of the specs passed as input and push them to a given push URL.

Parameters
  • specs (List[spack.spec.Spec]) – installed specs to be packaged

  • push_url (str) – url where to push the binary package

  • specs_kwargs (dict) – dictionary with two possible boolean keys, “include_root” and “include_dependencies”, which determine which part of each spec is packaged and pushed to the mirror

  • **kwargs – TODO

spack.binary_distribution.push_keys(*mirrors, **kwargs)[source]

Upload pgp public keys to the given mirrors

spack.binary_distribution.read_buildinfo_file(prefix)[source]

Read buildinfo file

spack.binary_distribution.relocate_package(spec, allow_root)[source]

Relocate the given package

spack.binary_distribution.select_signing_key(key=None)[source]
spack.binary_distribution.sign_specfile(key, force, specfile_path)[source]
spack.binary_distribution.tarball_directory_name(spec)[source]

Return name of the tarball directory according to the convention <os>-<architecture>/<compiler>/<package>-<version>/

spack.binary_distribution.tarball_name(spec, ext)[source]

Return the name of the tarfile according to the convention <os>-<architecture>-<package>-<dag_hash><ext>

spack.binary_distribution.tarball_path_name(spec, ext)[source]

Return the full path+name for a given spec according to the convention <tarball_directory_name>/<tarball_name>
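Putting the naming conventions above together, the layout can be sketched as plain string composition (the real functions derive these fields from a concrete spec; parameter names here are illustrative):

```python
import os


def tarball_directory_name(os_name, arch, compiler, package, version):
    # convention: <os>-<architecture>/<compiler>/<package>-<version>/
    return os.path.join(f"{os_name}-{arch}", compiler, f"{package}-{version}")


def tarball_name(os_name, arch, package, dag_hash, ext):
    # convention: <os>-<architecture>-<package>-<dag_hash><ext>
    return f"{os_name}-{arch}-{package}-{dag_hash}{ext}"


def tarball_path_name(os_name, arch, compiler, package, version, dag_hash, ext):
    # convention: <tarball_directory_name>/<tarball_name>
    return os.path.join(
        tarball_directory_name(os_name, arch, compiler, package, version),
        tarball_name(os_name, arch, package, dag_hash, ext),
    )
```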

spack.binary_distribution.try_direct_fetch(spec, mirrors=None)[source]

Try to find the spec directly on the configured mirrors

spack.binary_distribution.try_fetch(url_to_fetch)[source]

Utility function to try and fetch a file from a url, stage it locally, and return the path to the staged file.

Parameters

url_to_fetch (str) – Url pointing to remote resource to fetch

Returns

Path to locally staged resource or None if it could not be fetched.

spack.binary_distribution.try_verify(specfile_path)[source]

Utility function to attempt to verify a local file. Assumes the file is a clearsigned signature file.

Parameters

specfile_path (str) – Path to file to be verified.

Returns

True if the signature could be verified, False otherwise.

spack.binary_distribution.update_cache_and_get_specs()[source]

Get all concrete specs for build caches available on configured mirrors. Initialization of internal cache data structures is done as lazily as possible, so this method will also attempt to initialize and update the local index cache (essentially a no-op if it has been done already and nothing has changed on the configured mirrors.)

Throws:

FetchCacheError

spack.binary_distribution.write_buildinfo_file(spec, workdir, rel=False)[source]

Create a cache file containing information required for the relocation

spack.bootstrap module

spack.bootstrap.METADATA_YAML_FILENAME = 'metadata.yaml'

Name of the file containing metadata about the bootstrapping source

spack.bootstrap.all_root_specs(development=False)[source]

Return a list of all the root specs that may be used to bootstrap Spack.

Parameters

development (bool) – if True include dev dependencies

spack.bootstrap.black_root_spec()[source]
spack.bootstrap.bootstrapping_sources(scope=None)[source]

Return the list of configured sources of software for bootstrapping Spack

Parameters

scope (str or None) – if a valid configuration scope is given, return the list only from that scope

spack.bootstrap.clingo_root_spec()[source]

Return the root spec used to bootstrap clingo

spack.bootstrap.ensure_black_in_path_or_raise()[source]

Ensure that black is in the PATH or raise.

spack.bootstrap.ensure_bootstrap_configuration()[source]
spack.bootstrap.ensure_clingo_importable_or_raise()[source]

Ensure that the clingo module is available for import.

spack.bootstrap.ensure_executables_in_path_or_raise(executables, abstract_spec, cmd_check=None)[source]

Ensure that some executables are in path or raise.

Parameters
  • executables (list) – list of executables to be searched in the PATH, in order. The function exits on the first one found.

  • abstract_spec (str) – abstract spec that provides the executables

  • cmd_check (object) – callable predicate that takes a spack.util.executable.Executable command and validates it. Should return True if the executable is acceptable, False otherwise. Can be used, e.g., to ensure a suitable version of the command before accepting it for bootstrapping.

Raises

RuntimeError – if the executables cannot be ensured to be in PATH

Returns

Executable object
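The search-and-validate loop described above can be sketched as follows (simplified: the real function can also bootstrap the providing spec when nothing suitable is found, and returns an Executable rather than a path):

```python
import shutil


def find_acceptable_executable(executables, cmd_check=None, path=None):
    """Return the full path of the first executable found on PATH (or
    the given search path) that passes cmd_check, if one is given;
    raise RuntimeError if none qualifies."""
    for name in executables:
        found = shutil.which(name, path=path)
        if found is None:
            continue  # not on the search path at all
        if cmd_check is not None and not cmd_check(found):
            continue  # present, but rejected by the predicate
        return found  # exit on the first acceptable executable
    raise RuntimeError(f"cannot find any of {executables} in PATH")
```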

spack.bootstrap.ensure_flake8_in_path_or_raise()[source]

Ensure that flake8 is in the PATH or raise.

spack.bootstrap.ensure_gpg_in_path_or_raise()[source]

Ensure gpg or gpg2 are in the PATH or raise.

spack.bootstrap.ensure_isort_in_path_or_raise()[source]

Ensure that isort is in the PATH or raise.

spack.bootstrap.ensure_module_importable_or_raise(module, abstract_spec=None)[source]

Make the requested module available for import, or raise.

This function tries to import a Python module in the current interpreter using, in order, the methods configured in bootstrap.yaml.

If none of the methods succeed, an exception is raised. The function exits on first success.

Parameters
  • module (str) – module to be imported in the current interpreter

  • abstract_spec (str) – abstract spec that might provide the module. If not given it defaults to “module”

Raises

ImportError – if the module couldn’t be imported

spack.bootstrap.ensure_mypy_in_path_or_raise()[source]

Ensure that mypy is in the PATH or raise.

spack.bootstrap.ensure_patchelf_in_path_or_raise()[source]

Ensure patchelf is in the PATH or raise.

spack.bootstrap.flake8_root_spec()[source]
spack.bootstrap.gnupg_root_spec()[source]

Return the root spec used to bootstrap GnuPG

spack.bootstrap.is_bootstrapping()[source]
spack.bootstrap.isort_root_spec()[source]
spack.bootstrap.mypy_root_spec()[source]
spack.bootstrap.patchelf_root_spec()[source]

Return the root spec used to bootstrap patchelf

spack.bootstrap.source_is_enabled_or_raise(conf)[source]

Raise ValueError if the source is not enabled for bootstrapping

spack.bootstrap.spack_python_interpreter()[source]

Override the current configuration to set the interpreter under which Spack is currently running as the only Python external spec available.

spack.bootstrap.spec_for_current_python()[source]

For bootstrapping purposes we are interested only in the Python minor version (all patch releases of the same minor are ABI compatible) and in whether ucs4 support has been enabled for Python 2.7

See:

https://www.python.org/dev/peps/pep-0513/
https://stackoverflow.com/a/35801395/771663
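Since only the minor version matters, the spec can be derived from sys.version_info. A sketch (the exact spec string format is an assumption):

```python
import sys


def spec_for_current_python():
    # only major.minor matters: all patch releases of the same minor
    # version are ABI compatible (see PEP 513)
    major, minor = sys.version_info[:2]
    return f"python@{major}.{minor}"
```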

spack.bootstrap.status_message(section)[source]

Return a status message to be printed to screen that refers to the section passed as argument and a bool which is True if there are missing dependencies.

Parameters

section (str) – either ‘core’ or ‘buildcache’ or ‘optional’ or ‘develop’

spack.bootstrap.store_path()[source]

Path to the store used for bootstrapped software

spack.bootstrap.verify_patchelf(patchelf)[source]

Older patchelf versions can produce broken binaries, so we verify the version here.

Parameters

patchelf (spack.util.executable.Executable) – patchelf executable

spack.build_environment module

This module contains all routines related to setting up the package build environment. All of this is set up by package.py just before install() is called.

There are two parts to the build environment:

  1. Python build environment (i.e. install() method)

This is how things are set up when install() is called. Spack takes advantage of each package being in its own module by adding a bunch of command-like functions (like configure(), make(), etc.) in the package’s module scope. This allows package writers to call them all directly in Package.install() without writing ‘self.’ everywhere. No, this isn’t Pythonic. Yes, it makes the code more readable and more like the shell script from which someone is likely porting.

  2. Build execution environment

    This is the set of environment variables, like PATH, CC, CXX, etc. that control the build. There are also a number of environment variables used to pass information (like RPATHs and other information about dependencies) to Spack’s compiler wrappers. All of these env vars are also set up here.

Skimming this module is a nice way to get acquainted with the types of calls you can make from within the install() function.

exception spack.build_environment.ChildError(msg, module, classname, traceback_string, log_name, log_type, context)[source]

Bases: InstallError

Special exception class for wrapping exceptions from child processes in Spack’s build environment.

The main features of a ChildError are:

  1. They’re serializable, so when a child build fails, we can send one of these to the parent and let the parent report what happened.

  2. They have a traceback field containing a traceback generated on the child immediately after failure. Spack will print this on failure in lieu of trying to run sys.excepthook on the parent process, so users will see the correct stack trace from a child.

  3. They also contain context, which shows context in the Package implementation where the error happened. This helps people debug Python code in their packages. To get it, Spack searches the stack trace for the deepest frame where self is in scope and is an instance of PackageBase. This will generally find a useful spot in the package.py file.

The long_message of a ChildError displays one of two things:

  1. If the original error was a ProcessError, indicating a command died during the build, we’ll show context from the build log.

  2. If the original error was any other type of error, we’ll show context from the Python code.

SpackError handles displaying the special traceback if we’re in debug mode with spack -d.

build_errors = [('spack.util.executable', 'ProcessError')]
property long_message
class spack.build_environment.MakeExecutable(name, jobs, **kwargs)[source]

Bases: Executable

Special callable executable object for make so the user can specify parallelism options on a per-invocation basis. Specifying ‘parallel’ to the call will override whatever the package’s global setting is, so you can either default to true or false and override particular calls. Specifying ‘jobs_env’ to a particular call will name an environment variable which will be set to the parallelism level (without affecting the normal invocation with -j).
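The per-invocation parallelism described above can be sketched as a helper that computes the extra make arguments for one call (simplified: the real class subclasses spack.util.executable.Executable and applies this inside __call__):

```python
def make_jobs_args(jobs, parallel=True, jobs_env=None, env=None):
    """Compute extra make arguments for a single invocation.

    If parallel and jobs > 1, add -jN. If jobs_env is given, also export
    the parallelism level through that environment variable, without
    affecting the -j flag itself."""
    env = dict(env) if env is not None else {}
    args = []
    if parallel and jobs > 1:
        args.append(f"-j{jobs}")
    if jobs_env:
        # serial calls still export a level of 1 (sketch assumption)
        env[jobs_env] = str(jobs if parallel else 1)
    return args, env
```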

class spack.build_environment.ModuleChangePropagator(package)[source]

Bases: object

Wrapper class to accept changes to a package.py Python module, and propagate them in the MRO of the package.

It is mainly used as a substitute of the package.py module, when calling the “setup_dependent_package” function during build environment setup.

propagate_changes_to_mro()[source]
exception spack.build_environment.StopPhase(message, long_message=None)[source]

Bases: SpackError

Pickle-able exception to control stopped builds.

spack.build_environment.clean_environment()[source]
spack.build_environment.determine_number_of_jobs(parallel=False, command_line=None, config_default=None, max_cpus=None)[source]

Packages that require sequential builds need 1 job. Otherwise we use the number of jobs set on the command line. If not set, then we use the config defaults (which is usually set through the builtin config scope), but we cap to the number of CPUs available to avoid oversubscription.

Parameters
  • parallel (bool or None) – true when package supports parallel builds

  • command_line (int or None) – command line override

  • config_default (int or None) – config default number of jobs

  • max_cpus (int or None) – maximum number of CPUs available. When None, this value is automatically determined.
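The precedence described above (sequential builds get 1 job; an explicit command-line value wins; otherwise the config default, capped at the available CPUs) can be sketched as follows. Whether the command-line value is itself capped, and the fallback default of 16, are assumptions of this sketch:

```python
import os


def determine_number_of_jobs(parallel=False, command_line=None,
                             config_default=None, max_cpus=None):
    if not parallel:
        return 1  # sequential builds always get one job
    if command_line is not None:
        return command_line  # explicit user choice wins (uncapped here)
    max_cpus = max_cpus or os.cpu_count() or 1
    config_default = config_default or 16  # illustrative fallback
    # cap the config default to avoid oversubscription
    return min(config_default, max_cpus)
```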

spack.build_environment.get_cmake_prefix_path(pkg)[source]
spack.build_environment.get_effective_jobs(jobs, parallel=True, supports_jobserver=False)[source]

Return the number of jobs, or None if supports_jobserver and a jobserver is detected.

spack.build_environment.get_package_context(traceback, context=3)[source]

Return some context for an error message when the build fails.

Parameters
  • traceback – A traceback from some exception raised during install

  • context (int) – Lines of context to show before and after the line where the error happened

This function inspects the stack to find where we failed in the package file, and it adds detailed context to the long_message from there.

spack.build_environment.get_rpath_deps(pkg)[source]

Return immediate or transitive RPATHs depending on the package.

spack.build_environment.get_rpaths(pkg)[source]

Get a list of all the rpaths for a package.

spack.build_environment.jobserver_enabled()[source]

Returns true if a posix jobserver (make) is detected.

spack.build_environment.load_external_modules(pkg)[source]

Traverse a package’s spec DAG and load any external modules.

Traverse a package’s dependencies and load any external modules associated with them.

Parameters

pkg (spack.package_base.PackageBase) – package to load deps for

spack.build_environment.modifications_from_dependencies(spec, context, custom_mods_only=True, set_package_py_globals=True)[source]

Returns the environment modifications that are required by the dependencies of a spec and also applies modifications to this spec’s package at module scope, if need be.

Environment modifications include:

  • Updating PATH so that executables can be found

  • Updating CMAKE_PREFIX_PATH and PKG_CONFIG_PATH so that their respective tools can find Spack-built dependencies

  • Running custom package environment modifications

Custom package modifications can conflict with the default PATH changes we make (specifically for the PATH, CMAKE_PREFIX_PATH, and PKG_CONFIG_PATH environment variables), so this applies changes in a fixed order:

  • All modifications (custom and default) from external deps first

  • All modifications from non-external deps afterwards

With that order, PrependPath actions from non-external default environment modifications will take precedence over custom modifications from external packages.

A secondary constraint is that custom and default modifications are grouped on a per-package basis: combined with the post-order traversal this means that default modifications of dependents can override custom modifications of dependencies (again, this would only occur for PATH, CMAKE_PREFIX_PATH, or PKG_CONFIG_PATH).

Parameters
  • spec (spack.spec.Spec) – spec for which we want the modifications

  • context (str) – either ‘build’ for build-time modifications or ‘run’ for run-time modifications

  • custom_mods_only (bool) – if True returns only custom modifications, if False returns custom and default modifications

  • set_package_py_globals (bool) – whether or not to set the global variables in the package.py files (this may be problematic when using buildcaches that have been built on a different but compatible OS)
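The fixed ordering described above can be sketched as a small Python function. This is a hypothetical illustration of the precedence rule only, not Spack's actual implementation; Dep and mods_for are made-up stand-ins.

```python
# Sketch of the ordering rule: external deps' modifications are applied
# first, non-external deps' afterwards, so PrependPath actions from
# non-external deps take precedence. Illustrative only.
from collections import namedtuple

Dep = namedtuple("Dep", ["name", "external"])

def ordered_modifications(deps, mods_for):
    """Collect env modifications from all external deps first, then
    from all non-external deps."""
    externals = [d for d in deps if d.external]
    internals = [d for d in deps if not d.external]
    mods = []
    for dep in externals + internals:
        mods.extend(mods_for(dep))
    return mods

deps = [Dep("zlib", False), Dep("openssl", True), Dep("cmake", False)]
order = ordered_modifications(deps, lambda d: [d.name])
print(order)  # external openssl first, then zlib and cmake
```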

spack.build_environment.parent_class_modules(cls)[source]

Get list of superclass modules that descend from spack.package_base.PackageBase

Includes cls.__module__

spack.build_environment.set_compiler_environment_variables(pkg, env)[source]
spack.build_environment.set_module_variables_for_package(pkg)[source]

Populate the module scope of install() with some useful functions. This makes things easier for package writers.

spack.build_environment.set_wrapper_variables(pkg, env)[source]

Set environment variables used by the Spack compiler wrapper (which have the prefix SPACK_) and also add the compiler wrappers to PATH.

This determines the injected -L/-I/-rpath options; each of these specifies a search order, and this function computes the options so that the order matches the DAG traversal in modifications_from_dependencies. That method uses a post-order traversal so that PrependPath actions from dependencies take lower precedence; we use a post-order traversal here as well, to match its visitation order (visiting the lowest-priority packages first).

spack.build_environment.setup_package(pkg, dirty, context='build')[source]

Execute all environment setup routines.

spack.build_environment.start_build_process(pkg, function, kwargs)[source]

Create a child process to do part of a spack build.

Parameters

Usage:

def child_fun():
    # do stuff
build_env.start_build_process(pkg, child_fun)

The child process is run with the build environment set up by spack.build_environment. This allows package authors to have full control over the environment, etc. without affecting other builds that might be executed in the same spack call.

If something goes wrong, the child process catches the error and passes it to the parent wrapped in a ChildError. The parent is expected to handle (or re-raise) the ChildError.

This uses multiprocessing.Process to create the child process. The mechanism used to create the process differs on different operating systems and for different versions of Python. In some cases “fork” is used (i.e. the “fork” system call) and some cases it starts an entirely new Python interpreter process (in the docs this is referred to as the “spawn” start method). Breaking it down by OS:

  • Linux always uses fork.

  • Mac OS uses fork before Python 3.8 and “spawn” for 3.8 and after.

  • Windows always uses the “spawn” start method.

For more information on multiprocessing child process creation mechanisms, see https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods

spack.build_environment.write_log_summary(out, log_type, log, last=None)[source]

spack.builder module

spack.builder.BUILDER_CLS = {'cmake': <class 'spack.build_systems.cmake.CMakeBuilder'>, 'meson': <class 'spack.build_systems.meson.MesonBuilder'>}

Builder classes, as registered by the “builder” decorator

class spack.builder.Builder(pkg)[source]

Bases: Sequence

A builder is a class that, given a package object (i.e. associated with concrete spec), knows how to install it.

The builder behaves like a sequence and, when iterated over, returns the “phases” of the installation in the correct order.

Parameters

pkg (spack.package_base.PackageBase) – package object to be built

archive_files = []

List of glob expressions. Each expression must be either absolute or relative to the package source path. Matching artifacts found at the end of the build process will be copied into the same directory tree as _spack_build_logfile and _spack_build_envfile.

build_system = None

Build system name. Must also be defined in derived classes.

legacy_attributes = ()
legacy_methods = ()
phases = ()

Sequence of phases. Must be defined in derived classes

property prefix
setup_build_environment(env)[source]

Sets up the build environment for a package.

This method will be called before the current package prefix exists in Spack’s store.

Parameters

env (spack.util.environment.EnvironmentModifications) – environment modifications to be applied when the package is built. Package authors can call methods on it to alter the build environment.

setup_dependent_build_environment(env, dependent_spec)[source]

Sets up the build environment of packages that depend on this one.

This is similar to setup_build_environment, but it is used to modify the build environments of packages that depend on this one.

This gives packages like Python and others that follow the extension model a way to implement common environment or compile-time settings for dependencies.

This method will be called before the dependent package prefix exists in Spack’s store.

Examples

1. Installing python modules generally requires PYTHONPATH to point to the lib/pythonX.Y/site-packages directory in the module’s install prefix. This method could be used to set that variable.

Parameters
  • env (spack.util.environment.EnvironmentModifications) – environment modifications to be applied when the dependent package is built. Package authors can call methods on it to alter the build environment.

  • dependent_spec (spack.spec.Spec) – the spec of the dependent package about to be built. This allows the extendee (self) to query the dependent’s state. Note that this package’s spec is available as self.spec
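The PYTHONPATH example above can be sketched as follows. The Env class here is a minimal stand-in for spack.util.environment.EnvironmentModifications (whose prepend_path method is assumed), and the spec, prefix, and Python version are illustrative.

```python
# Hypothetical sketch of how an extendee like Python might implement
# setup_dependent_build_environment; Env stands in for
# spack.util.environment.EnvironmentModifications.
class Env:
    def __init__(self):
        self.mods = []

    def prepend_path(self, var, path):
        # Record a "prepend" action for an environment variable.
        self.mods.append(("prepend", var, path))

class FakeSpec:
    prefix = "/opt/spack/py-numpy-1.24"  # illustrative prefix

def setup_dependent_build_environment(env, dependent_spec):
    # Make the dependent's site-packages importable during its build.
    site_pkgs = dependent_spec.prefix + "/lib/python3.10/site-packages"
    env.prepend_path("PYTHONPATH", site_pkgs)

env = Env()
setup_dependent_build_environment(env, FakeSpec())
print(env.mods[0][1])  # PYTHONPATH
```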

property spec
property stage
test()[source]
class spack.builder.BuilderMeta(name, bases, attr_dict)[source]

Bases: PhaseCallbacksMeta, ABCMeta

class spack.builder.CallbackTemporaryStage(attribute_name, callbacks)

Bases: tuple

An object of this kind is a shared global state used to collect callbacks during class definition time, and is flushed when the class object is created at the end of the class definition

Parameters
  • attribute_name (str) – name of the attribute that will be attached to the builder

  • callbacks (list) – container used to temporarily aggregate the callbacks

property attribute_name

Alias for field number 0

property callbacks

Alias for field number 1

class spack.builder.InstallationPhase(name, builder)[source]

Bases: object

Manages a single phase of the installation.

This descriptor stores at creation time the name of the method it should search for execution. The method is retrieved at __get__ time, so that it can be overridden by subclasses of whatever class declared the phases.

It also provides hooks to execute arbitrary callbacks before and after the phase.

copy()[source]
execute()[source]
class spack.builder.PhaseCallbacksMeta(name, bases, attr_dict)[source]

Bases: type

Permits registering arbitrary functions during class definition and running them later, before or after a given install phase.

Each method decorated with run_before or run_after gets temporarily stored in a global shared state when a class being defined is parsed by the Python interpreter. At class definition time that temporary storage gets flushed and a list of callbacks is attached to the class being defined.

static run_after(phase, when=None)[source]

Decorator to register a function for running after a given phase.

Parameters
  • phase (str) – phase after which the function must run.

  • when (str) – condition under which the function is run (if None, it is always run).

static run_before(phase, when=None)[source]

Decorator to register a function for running before a given phase.

Parameters
  • phase (str) – phase before which the function must run.

  • when (str) – condition under which the function is run (if None, it is always run).

spack.builder.builder(build_system_name)[source]

Class decorator used to register the default builder for a given build-system.

Parameters

build_system_name (str) – name of the build-system
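The registration this class decorator performs can be sketched as follows; BUILDER_CLS mirrors the mapping shown earlier in this module's docs, and the CMakeBuilder body is illustrative.

```python
# Hedged sketch of what a registration decorator like
# spack.builder.builder does: map a build-system name to its default
# builder class.
BUILDER_CLS = {}

def builder(build_system_name):
    def decorator(cls):
        BUILDER_CLS[build_system_name] = cls
        return cls
    return decorator

@builder("cmake")
class CMakeBuilder:
    phases = ("cmake", "build", "install")

print("cmake" in BUILDER_CLS)  # True
```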

spack.builder.buildsystem_name(pkg)[source]

Given a package object with an associated concrete spec, return the name of its build system.

Parameters

pkg (spack.package_base.PackageBase) – package for which we want the build system name

spack.builder.create(pkg)[source]

Given a package object with an associated concrete spec, return the builder object that can install it.

Parameters

pkg (spack.package_base.PackageBase) – package for which we want the builder

spack.builder.run_after(phase, when=None)

Decorator to register a function for running after a given phase.

Parameters
  • phase (str) – phase after which the function must run.

  • when (str) – condition under which the function is run (if None, it is always run).

spack.builder.run_before(phase, when=None)

Decorator to register a function for running before a given phase.

Parameters
  • phase (str) – phase before which the function must run.

  • when (str) – condition under which the function is run (if None, it is always run).

spack.caches module

Caches used by Spack to store data

class spack.caches.MirrorCache(root, skip_unstable_versions)[source]

Bases: object

store(fetcher, relative_dest)[source]

Fetch and relocate the fetcher’s target into our mirror cache.

Symlink a human readable path in our mirror to the actual storage location.

spack.caches.fetch_cache = <spack.fetch_strategy.FsCache object>

Spack’s local cache for downloaded source archives

spack.caches.fetch_cache_location()[source]

Filesystem cache of downloaded archives.

This prevents Spack from repeatedly fetching the same files when building the same package in different ways or multiple times.

spack.caches.misc_cache = <spack.util.file_cache.FileCache object>

Spack’s cache for small data

spack.caches.misc_cache_location()[source]

The misc_cache is Spack’s cache for small data.

Currently the misc_cache stores indexes for virtual dependency providers and for which packages provide which tags.

spack.ci module

class spack.ci.CDashHandler(ci_cdash)[source]

Bases: object

Class for managing CDash data and processing.

args()[source]
property build_name

Returns the CDash build name.

A name will be generated if the current_spec property is set; otherwise, the value will be retrieved from the environment through the SPACK_CDASH_BUILD_NAME variable.

Returns: (str) current spec’s CDash build name.

property build_stamp

Returns the CDash build stamp.

The one defined by SPACK_CDASH_BUILD_STAMP environment variable is preferred due to the representation of timestamps; otherwise, one will be built.

Returns: (str) current CDash build stamp

copy_test_results(source, dest)[source]

Copy test results to artifacts directory.

create_buildgroup(opener, headers, url, group_name, group_type)[source]
populate_buildgroup(job_names)[source]
property project_enc
report_skipped(spec, directory_name, reason)[source]
property upload_url
class spack.ci.TemporaryDirectory[source]

Bases: object

spack.ci.can_sign_binaries()[source]

Utility method to determine if this spack instance is capable of signing binary packages. This is currently only possible if the spack gpg keystore contains exactly one secret key.

spack.ci.can_verify_binaries()[source]

Utility method to determine if this spack instance is capable (at least in theory) of verifying signed binaries.

spack.ci.compute_affected_packages(rev1='HEAD^', rev2='HEAD')[source]

Determine which packages were added, removed or changed between rev1 and rev2, and return the names as a set

spack.ci.configure_compilers(compiler_action, scope=None)[source]
Depending on the compiler_action parameter, either turn on the install_missing_compilers config option, find spack compilers, or do nothing. This is used from rebuild jobs in bootstrapping pipelines: in the bootstrapping phase we pass FIND_ANY for compiler-agnostic bootstrapping, while in the spec-building phase we pass INSTALL_MISSING so that spack uses the compiler that was built in the previous phase and is now sitting in the binary mirror.

Parameters
  • compiler_action (str) – ‘FIND_ANY’, ‘INSTALL_MISSING’ have meanings described above. Any other value essentially results in a no-op.

  • scope (spack.config.ConfigScope) – Optional. The scope in which to look for compilers, in case ‘FIND_ANY’ was provided.

spack.ci.copy_files_to_artifacts(src, artifacts_dir)[source]

Copy file(s) to the given artifacts directory

Parameters
  • src (str) – the glob-friendly path expression for the file(s) to copy

  • artifacts_dir (str) – the destination directory

spack.ci.copy_stage_logs_to_artifacts(job_spec, job_log_dir)[source]

Copy selected build stage file(s) to the given artifacts directory

Looks for spack-build-out.txt in the stage directory of the given job_spec, and attempts to copy the file into the directory given by job_log_dir.

Parameters
  • job_spec (spack.spec.Spec) – spec associated with spack install log

  • job_log_dir (str) – path into which build log should be copied

spack.ci.copy_test_logs_to_artifacts(test_stage, job_test_dir)[source]

Copy test log file(s) to the given artifacts directory

Parameters
  • test_stage (str) – test stage path

  • job_test_dir (str) – the destination artifacts test directory

spack.ci.create_buildcache(**kwargs)[source]

Create the buildcache at the provided mirror(s).

Parameters

kwargs (dict) – dictionary of arguments used to create the buildcache

List of recognized keys:

  • “env” (spack.environment.Environment): the active environment

  • “buildcache_mirror_url” (str or None): URL for the buildcache mirror

  • “pipeline_mirror_url” (str or None): URL for the pipeline mirror

  • “pr_pipeline” (bool): True if the CI job is for a PR

  • “json_path” (str): path to the spec’s JSON file

spack.ci.display_broken_spec_messages(base_url, hashes)[source]

Fetch the broken spec file for each of the hashes under the base_url and print a message with some details about each one.

spack.ci.download_and_extract_artifacts(url, work_dir)[source]
Look for gitlab artifacts.zip at the given url, and attempt to download and extract the contents into the given work_dir.

Parameters
  • url (str) – Complete url to artifacts.zip file

  • work_dir (str) – Path to destination where artifacts should be extracted

spack.ci.generate_gitlab_ci_yaml(env, print_summary, output_file, prune_dag=False, check_index_only=False, run_optimizer=False, use_dependencies=False, artifacts_root=None, remote_mirror_override=None)[source]
Generate a gitlab yaml file to run a dynamic child pipeline from the spec matrix in the active environment.

Parameters
  • env (spack.environment.Environment) – Activated environment object which must contain a gitlab-ci section describing how to map specs to runners

  • print_summary (bool) – Should we print a summary of all the jobs in the stages in which they were placed.

  • output_file (str) – File path where generated file should be written

  • prune_dag (bool) – If True, do not generate jobs for specs that are already built on the mirror.

  • check_index_only (bool) – If True, attempt to fetch the mirror index and use only that to determine whether specs are already built on the mirror (this mode results in faster yaml generation). Otherwise, also check each spec directly by url (useful if there is no index or it might be out of date).

  • run_optimizer (bool) – If True, post-process the generated yaml to try to reduce its size (attempts to collect repeated configuration and replace it with definitions).

  • use_dependencies (bool) – If true, use “dependencies” rather than “needs” (“needs” allows DAG scheduling). Useful if gitlab instance cannot be configured to handle more than a few “needs” per job.

  • artifacts_root (str) – Path where artifacts like logs, environment files (spack.yaml, spack.lock), etc should be written. GitLab requires this to be within the project directory.

  • remote_mirror_override (str) – Typically only needed when one spack.yaml is used to populate several mirrors with binaries, based on some criteria. Spack protected pipelines populate different mirrors based on branch name, facilitated by this option.

spack.ci.get_change_revisions()[source]

If this is a git repo get the revisions to use when checking for changed packages and spack core modules.

spack.ci.get_job_name(phase, strip_compiler, spec, osarch, build_group)[source]

Given the necessary parts, format the gitlab job name

Parameters
  • phase (str) – Either ‘specs’ for the main phase, or the name of a bootstrapping phase

  • strip_compiler (bool) – Should compiler be stripped from job name

  • spec (spack.spec.Spec) – Spec job will build

  • osarch – Architecture TODO: (this is a spack.spec.ArchSpec, but sphinx doesn’t recognize the type and fails).

  • build_group (str) – Name of build group this job belongs to (a CDash notion)

Returns: The job name

spack.ci.get_spack_info()[source]

If spack is running from a git repo, return the most recent git log entry, otherwise, return a string containing the spack version.

spack.ci.get_spec_filter_list(env, affected_pkgs)[source]
Given a list of package names and an active/concretized environment, return the set of all concrete specs from the environment that could have been affected by changing the list of packages.

Parameters
Returns

A set of concrete specs from the active environment, including those associated with affected packages, their dependencies and dependents, as well as their dependents’ dependencies.

spack.ci.get_stack_changed(env_path, rev1='HEAD^', rev2='HEAD')[source]

Given an environment manifest path and two revisions to compare, return whether or not the stack was changed. Returns True if the environment manifest changed between the provided revisions (or additionally if the .gitlab-ci.yml file itself changed). Returns False otherwise.

spack.ci.import_signing_key(base64_signing_key)[source]
Given a Base64-encoded gpg key, decode and import it to use for signing packages.

Parameters
  • base64_signing_key (str) – A gpg key including the secret key, armor-exported and base64 encoded, so it can be stored in a gitlab CI variable. For an example of how to generate such a key, see:

  • https://github.com/spack/spack-infrastructure/blob/main/gitlab-docker/files/gen-key

spack.ci.process_command(name, commands, repro_dir)[source]

Create a script for and run the command. Copy the script to the reproducibility directory.

Parameters
  • name (str) – name of the command being processed

  • commands (list) – list of arguments for single command or list of lists of arguments for multiple commands. No shell escape is performed.

  • repro_dir (str) – Job reproducibility directory

Returns: the exit code from processing the command

spack.ci.push_mirror_contents(env, specfile_path, mirror_url, sign_binaries)[source]

Push one or more binary packages to the mirror.

Parameters
  • env (spack.environment.Environment) – Optional environment. If provided, it is used to make sure binary package to push exists in the environment.

  • specfile_path (str) – Path to spec.json corresponding to built pkg to push.

  • mirror_url (str) – Base url of target mirror

  • sign_binaries (bool) – If True, spack will attempt to sign binary package before pushing.

spack.ci.read_broken_spec(broken_spec_url)[source]

Read data from broken specs file located at the url, return as a yaml object.

spack.ci.remove_other_mirrors(mirrors_to_keep, scope=None)[source]

Remove all mirrors from the given config scope, the exceptions being any listed in mirrors_to_keep, which is a list of mirror urls.

spack.ci.reproduce_ci_job(url, work_dir)[source]

Given a url to gitlab artifacts.zip from a failed ‘spack ci rebuild’ job, attempt to setup an environment in which the failure can be reproduced locally. This entails the following:

First download and extract artifacts. Then look through those artifacts to glean some information needed for the reproducer (e.g. one of the artifacts contains information about the version of spack tested by gitlab, another is the generated pipeline yaml containing details of the job like the docker image used to run it). The output of this function is a set of printed instructions for running docker, and then commands to run to reproduce the build once inside the container.

spack.ci.run_standalone_tests(**kwargs)[source]

Run stand-alone tests on the current spec.

Parameters

kwargs (dict) – dictionary of arguments used to run the tests

List of recognized keys:

  • “cdash” (CDashHandler): (optional) cdash handler instance

  • “fail_fast” (bool): (optional) terminate tests after the first failure

  • “log_file” (str): (optional) test log file name if NOT CDash reporting

  • “job_spec” (Spec): spec that was built

  • “repro_dir” (str): reproduction directory

spack.ci.setup_spack_repro_version(repro_dir, checkout_commit, merge_commit=None)[source]
Look in the local spack clone to find the checkout_commit and, if provided, the merge_commit given as arguments. If those commits can be found locally, then clone spack and attempt to recreate a merge commit with the same parent commits as tested in gitlab. This looks something like: 1) git clone repo && cd repo, 2) git checkout <checkout_commit>, 3) git merge <merge_commit>. If no merge_commit is provided, step (3) is skipped.

Parameters
  • repro_dir (str) – Location where spack should be cloned

  • checkout_commit (str) – SHA of PR branch commit

  • merge_commit (str) – SHA of target branch parent

Returns: True if git repo state was successfully recreated, or False otherwise.

spack.ci.stage_spec_jobs(specs, check_index_only=False, mirrors_to_check=None)[source]
Take a set of release specs and generate a list of “stages”, where the jobs in any stage depend only on jobs in previous stages. This allows us to maximize build parallelism within the gitlab-ci framework.

Parameters
  • specs (Iterable) – Specs to build

  • check_index_only (bool) – Regardless of whether DAG pruning is enabled, all configured mirrors are searched to see if binaries for specs are up to date on those mirrors. This flag limits that search to the binary cache indices on those mirrors to speed the process up, even though there is no guarantee the index is up to date.

  • mirrors_to_check – Optional mapping giving mirrors to check instead of any configured mirrors.

Returns: A tuple of information objects describing the specs, dependencies and stages:

spec_labels: A dictionary mapping spec labels (of the form pkg-name/hash-prefix) to objects containing “spec” and “needs_rebuild” keys. The root spec is the spec of which this spec is a dependency, and the spec is the formatted spec string for this spec.

deps: A dictionary whose keys also appear as keys in the spec_labels dictionary, and whose values are the set of dependencies for that spec.

stages: An ordered list of sets, each of which contains all the jobs to be built in that stage. The jobs are expressed in the same format as the keys in the spec_labels and deps objects.
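The staging algorithm described above amounts to repeatedly peeling off the jobs whose remaining dependencies all sit in earlier stages. A minimal sketch of that idea (not Spack's exact implementation; the labels are illustrative):

```python
# Sketch of stage construction: each round, collect the jobs with no
# unsatisfied dependencies, emit them as a stage, and remove them from
# the remaining jobs' dependency sets.
def stage_jobs(deps):
    """deps maps each job label to the set of labels it depends on."""
    remaining = {label: set(d) for label, d in deps.items()}
    stages = []
    while remaining:
        ready = {label for label, d in remaining.items() if not d}
        if not ready:
            raise ValueError("dependency cycle detected")
        stages.append(ready)
        for label in ready:
            del remaining[label]
        for d in remaining.values():
            d -= ready
    return stages

deps = {
    "zlib/abc": set(),
    "openssl/def": {"zlib/abc"},
    "curl/ghi": {"openssl/def", "zlib/abc"},
}
print(stage_jobs(deps))
```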

spack.ci.write_broken_spec(url, pkg_name, stack_name, job_url, pipeline_url, spec_dict)[source]

Given a url to write to and the details of the failed job, write an entry in the broken specs list.

spack.ci_needs_workaround module

spack.ci_needs_workaround.convert_job(job_entry)[source]
spack.ci_needs_workaround.get_job_name(needs_entry)
spack.ci_needs_workaround.needs_to_dependencies(yaml)[source]

spack.ci_optimization module

spack.ci_optimization.add_extends(yaml, key)[source]

Modifies the given object “yaml” so that it includes an “extends” key whose value includes “key”.

If “extends” is not in yaml, then yaml is modified such that yaml[“extends”] == key.

If yaml[“extends”] is a str, then yaml is modified such that yaml[“extends”] == [yaml[“extends”], key]

If yaml[“extends”] is a list that does not include key, then key is appended to the list.

Otherwise, yaml is left unchanged.
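The four cases above can be written out directly, assuming “yaml” is a plain dict:

```python
# Direct sketch of the add_extends behavior described above.
def add_extends(yaml, key):
    extends = yaml.get("extends")
    if extends is None:
        yaml["extends"] = key                    # no "extends" yet
    elif isinstance(extends, str):
        yaml["extends"] = [extends, key]         # promote str to list
    elif isinstance(extends, list) and key not in extends:
        extends.append(key)                      # append if missing
    # otherwise: leave yaml unchanged

job = {"script": ["make"]}
add_extends(job, ".base")
add_extends(job, ".linux")
print(job["extends"])  # ['.base', '.linux']
```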

spack.ci_optimization.build_histogram(iterator, key)[source]

Builds a histogram of values given an iterable of mappings and a key.

For each mapping “m” with key “key” in iterator, the value m[key] is considered.

Returns a list of tuples (hash, count, proportion, value), where

  • “hash” is a sha1sum hash of the value.

  • “count” is the number of occurrences of values that hash to “hash”.

  • “proportion” is the proportion of all values considered above that hash to “hash”.

  • “value” is one of the values considered above that hash to “hash”. Which value is chosen when multiple values hash to the same “hash” is undefined.

The list is sorted in descending order by count, yielding the most frequently occurring hashes first.
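A sketch of the histogram described above; serializing each value with sorted-key JSON before hashing is an assumption here (the real code may serialize differently), but the (hash, count, proportion, value) shape and descending sort follow the description.

```python
# Hedged sketch of build_histogram: count values by sha1 of a canonical
# serialization and report (hash, count, proportion, value) tuples.
import hashlib
import json

def build_histogram(iterator, key):
    counts, values, total = {}, {}, 0
    for m in iterator:
        if key not in m:
            continue
        value = m[key]
        h = hashlib.sha1(json.dumps(value, sort_keys=True).encode()).hexdigest()
        counts[h] = counts.get(h, 0) + 1
        values[h] = value
        total += 1
    result = [(h, c, c / total, values[h]) for h, c in counts.items()]
    return sorted(result, key=lambda t: t[1], reverse=True)

jobs = [{"image": "a"}, {"image": "a"}, {"image": "b"}]
hist = build_histogram(jobs, "image")
print([(c, v) for _, c, _, v in hist])  # most frequent value first
```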

spack.ci_optimization.common_subobject(yaml, sub)[source]

Factor prototype object “sub” out of the values of mapping “yaml”.

Consider a modified copy of yaml, “new”, where for each key, “key” in yaml:

  • If yaml[key] matches sub, then new[key] = subkeys(yaml[key], sub).

  • Otherwise, new[key] = yaml[key].

If the above match criterion is not satisfied for any such key, then (yaml, None) is returned and the yaml object is left unchanged.

Otherwise, each matching value in new is modified as in add_extends(new[key], common_key), and then new[common_key] is set to sub. The common_key value is chosen such that it does not match any preexisting key in new. In this case, (new, common_key) is returned.

spack.ci_optimization.matches(obj, proto)[source]

Returns True if the test object “obj” matches the prototype object “proto”.

If obj and proto are mappings, obj matches proto if (key in obj) and (obj[key] matches proto[key]) for every key in proto.

If obj and proto are sequences, obj matches proto if they are of the same length and (a matches b) for every (a,b) in zip(obj, proto).

Otherwise, obj matches proto if obj == proto.

Precondition: proto must not have any reference cycles
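The three matching rules above translate directly into a recursive function (strings are excluded from the sequence case so they fall through to plain equality):

```python
# Sketch implementing the matching rules described above: mappings
# match key-by-key against proto's keys, sequences match element-wise
# at equal length, everything else falls back to equality.
from collections.abc import Mapping, Sequence

def matches(obj, proto):
    if isinstance(obj, Mapping) and isinstance(proto, Mapping):
        return all(k in obj and matches(obj[k], proto[k]) for k in proto)
    if (isinstance(obj, Sequence) and isinstance(proto, Sequence)
            and not isinstance(obj, str) and not isinstance(proto, str)):
        return (len(obj) == len(proto)
                and all(matches(a, b) for a, b in zip(obj, proto)))
    return obj == proto

print(matches({"a": 1, "b": 2}, {"a": 1}))  # True: extra keys allowed
print(matches([1, 2], [1]))                 # False: lengths differ
```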

spack.ci_optimization.optimizer(yaml)[source]
spack.ci_optimization.print_delta(name, old, new, applied=None)[source]
spack.ci_optimization.sort_yaml_obj(obj)[source]
spack.ci_optimization.subkeys(obj, proto)[source]

Returns the test mapping “obj” after factoring out the items it has in common with the prototype mapping “proto”.

Consider a recursive merge operation, merge(a, b), on mappings a and b that returns a mapping, m, whose keys are the union of the keys of a and b, and for every such key, “k”, its corresponding value is:

  • merge(a[k], b[k]) if a[k] and b[k] are mappings, or

  • b[k] if (k in b) and not matches(a[k], b[k]), or

  • a[k] otherwise

If obj and proto are mappings, the returned object is the smallest object, “a”, such that merge(a, proto) matches obj.

Otherwise, obj is returned.

spack.ci_optimization.try_optimization_pass(name, yaml, optimization_pass, *args, **kwargs)[source]

Try applying an optimization pass and return information about the result

“name” is a string describing the nature of the pass. If it is a non-empty string, summary statistics are also printed to stdout.

“yaml” is the object to apply the pass to.

“optimization_pass” is the function implementing the pass to be applied.

“args” and “kwargs” are the additional arguments to pass to optimization pass. The pass is applied as

>>> (new_yaml, *other_results) = optimization_pass(yaml, *args, **kwargs)

The pass’s results are greedily rejected if it does not modify the original yaml document, or if it produces a yaml document that serializes to a larger string.

Returns (new_yaml, yaml, applied, other_results) if applied, or (yaml, new_yaml, applied, other_results) otherwise.

spack.compiler module

class spack.compiler.Compiler(cspec, operating_system, target, paths, modules=None, alias=None, environment=None, extra_rpaths=None, enable_implicit_rpaths=None, **kwargs)[source]

Bases: object

This class encapsulates a Spack “compiler”, which includes C, C++, and Fortran compilers. Subclasses should implement support for specific compilers, their possible names, arguments, and how to identify the particular type of compiler.

PrgEnv = None
PrgEnv_compiler = None
property c11_flag
property c99_flag
cc_names = []
property cc_pic_flag

Returns the flag used by the C compiler to produce Position Independent Code (PIC).

property cc_rpath_arg
classmethod cc_version(cc)[source]
compiler_environment()[source]
property cxx11_flag
property cxx14_flag
property cxx17_flag
property cxx98_flag
cxx_names = []
property cxx_pic_flag

Returns the flag used by the C++ compiler to produce Position Independent Code (PIC).

property cxx_rpath_arg
classmethod cxx_version(cxx)[source]
property debug_flags
classmethod default_version(cc)[source]

Override just this to override all compiler version functions.

property disable_new_dtags
property enable_new_dtags
classmethod extract_version_from_output(output)[source]

Extracts the version from compiler’s output.

f77_names = []
property f77_pic_flag

Returns the flag used by the F77 compiler to produce Position Independent Code (PIC).

property f77_rpath_arg
classmethod f77_version(f77)[source]
fc_names = []
property fc_pic_flag

Returns the flag used by the FC compiler to produce Position Independent Code (PIC).

property fc_rpath_arg
classmethod fc_version(fc)[source]
get_real_version()[source]

Query the compiler for its version.

This is the “real” compiler version, regardless of what is in the compilers.yaml file, which the user can change to name their compiler.

Use the runtime environment of the compiler (modules and environment modifications) to enable the compiler to run properly on any platform.

ignore_version_errors = ()

Return values to ignore when invoking the compiler to get its version

implicit_rpaths()[source]
property linker_arg

Flag that needs to be used to pass an argument to the linker.

property openmp_flag
property opt_flags
property prefix

Query the compiler for its install prefix. This is the install path as reported by the compiler. Note that paths for cc, cxx, etc. are not enough to find the install prefix of the compiler, since they can be symlinks, wrappers, or filenames instead of absolute paths.

prefixes = []
property real_version

Compiler version as reported by the executable, used for API determinations

E.g. C++11 flag checks.

property required_libs

For executables created with this compiler, the compiler libraries that would be generally required to run it.

classmethod search_regexps(language)[source]
setup_custom_environment(pkg, env)[source]

Set any environment variables necessary to use the compiler.

suffixes = ['-.*']
property verbose_flag

This property should be overridden in the compiler subclass if a verbose flag is available.

If it is not overridden, it is assumed to not be supported.

verify_executables()[source]

Raise an error if any of the compiler executables is not valid.

This method confirms that for all of the compilers (cc, cxx, f77, fc) that have paths, those paths exist and are executable by the current user. Raises a CompilerAccessError if any of the non-null paths for the compiler are not accessible.

property version
version_argument = '-dumpversion'

Compiler argument that produces version information

version_regex = '(.*)'

Regex used to extract version from compiler’s output
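The interplay between version_argument and version_regex can be sketched as a pure extraction step (the subprocess invocation of the compiler is omitted; the sample outputs and the subclass-style regex shown are illustrative):

```python
# Sketch of version extraction: run the compiler with version_argument
# (e.g. -dumpversion), then pull the version out of the output with
# version_regex. Only the regex step is shown here.
import re

def extract_version_from_output(output, version_regex=r"(.*)"):
    match = re.search(version_regex, output)
    return match.group(1) if match else "unknown"

# gcc -dumpversion prints the bare version, matched by the default regex:
print(extract_version_from_output("12.2.0"))
# A subclass might override version_regex for chattier output:
print(extract_version_from_output("Apple clang version 14.0.3",
                                  r"version ([\d.]+)"))
```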

spack.concretize module

Functions here are used to take abstract specs and make them concrete. For example, if a spec asks for a version between 1.8 and 1.9, these functions will take the most recent 1.9 version of the package available. Or, if the user didn’t specify a compiler for a spec, then this will assign a compiler to the spec based on defaults or user preferences.

TODO: make this customizable and allow users to configure concretization policies.

class spack.concretize.Concretizer(abstract_spec=None)[source]

Bases: object

You can subclass this class to override some of the default concretization strategies, or you can override all of them.

adjust_target(spec)[source]

Adjusts the target microarchitecture if the compiler is too old to support the default one.

Parameters

spec – spec to be concretized

Returns

True if spec was modified, False otherwise

check_for_compiler_existence = None

Controls whether we check that compiler versions actually exist during concretization. Used for testing and for mirror creation

choose_virtual_or_external(spec)[source]

Given a list of candidate virtual and external packages, try to find one that is most ABI compatible.

concretize_architecture(spec)[source]

If the spec has no architecture, provide the defaults of the platform. If the architecture is not a string type, then check whether the platform, target, or operating system are concretized; if any of the fields are changed, return True. If everything is concretized (i.e., the architecture attribute is a namedtuple of classes), return False. If the target is a string type, convert the string into a concretized architecture. If the spec has no architecture and the root of the DAG has an architecture, use the root’s; otherwise use the defaults on the platform.

concretize_compiler(spec)[source]

If the spec already has a compiler, we’re done. If not, then take the compiler used for the nearest ancestor with a compiler spec and use that. If the ancestor’s compiler is not concrete, then use the preferred compiler as specified in spackconfig.

Intuition: Use the spackconfig default if no package that depends on this one has a strict compiler requirement. Otherwise, try to build with the compiler that will be used by libraries that link to this one, to maximize compatibility.

concretize_compiler_flags(spec)[source]

The compiler flags are updated to match those of the spec whose compiler is used, defaulting to no compiler flags in the spec. Default specs set at the compiler level will still be added later.

concretize_develop(spec)[source]

Add dev_path=* variant to packages built from local source.

concretize_variants(spec)[source]

If the spec already has variants filled in, return. Otherwise, add the user preferences from packages.yaml or the default variants from the package specification.

concretize_version(spec)[source]

If the spec is already concrete, return. Otherwise take the preferred version from spackconfig, and default to the package’s version if there are no available versions.

TODO: In many cases we probably want to look for installed versions of each package and use an installed version if we can link to it. The policy implemented here will tend to rebuild a lot of stuff because it will prefer a compiler in the spec to any compiler already-installed things were built with. There is likely some better policy that finds some middle ground between these two extremes.

target_from_package_preferences(spec)[source]

Returns the preferred target from the package preferences if there’s any.

Parameters

spec – abstract spec to be concretized

exception spack.concretize.InsufficientArchitectureInfoError(spec, archs)[source]

Bases: SpackError

Raised when details on architecture cannot be collected from the system

exception spack.concretize.NoBuildError(spec)[source]

Bases: SpecError

Raised when a package is configured with the buildable option False, but no satisfactory external versions can be found

exception spack.concretize.NoCompilersForArchError(arch, available_os_targets)[source]

Bases: SpackError

exception spack.concretize.NoValidVersionError(spec)[source]

Bases: SpackError

Raised when there is no way to have a concrete version for a particular spec.

exception spack.concretize.UnavailableCompilerVersionError(compiler_spec, arch=None)[source]

Bases: SpackError

Raised when there is no available compiler that satisfies a compiler spec.

spack.concretize.concretize_specs_together(*abstract_specs, **kwargs)[source]

Given a number of specs as input, tries to concretize them together.

Parameters
  • tests (bool or list or set) – False to run no tests, True to test all packages, or a list of package names to run tests only for those packages

  • *abstract_specs – abstract specs to be concretized, given either as Specs or strings

Returns

List of concretized specs

spack.concretize.disable_compiler_existence_check()[source]
spack.concretize.enable_compiler_existence_check()[source]
spack.concretize.find_spec(spec, condition, default=None)[source]

Searches the dag from spec in an intelligent order and looks for a spec that matches a condition

class spack.concretize.reverse_order(value)[source]

Bases: object

Helper for creating key functions.

This is a wrapper that inverts the sense of the natural comparisons on the object.
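A minimal sketch of such a comparison-inverting wrapper (not Spack's exact implementation) shows how it lets a single sort key mix ascending and descending fields:

```python
import functools


@functools.total_ordering
class ReverseOrder:
    """Wrap a value so that comparisons sort in the opposite direction."""

    def __init__(self, value):
        self.value = value

    def __eq__(self, other):
        return self.value == other.value

    def __lt__(self, other):
        # Inverted: a "smaller" wrapper holds the larger value.
        return self.value > other.value


# Sort names ascending but version numbers descending:
specs = [("zlib", 2), ("zlib", 1), ("abc", 3)]
ordered = sorted(specs, key=lambda s: (s[0], ReverseOrder(s[1])))
# → [('abc', 3), ('zlib', 2), ('zlib', 1)]
```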

spack.config module

This module implements Spack’s configuration file handling.

This implements Spack’s configuration system, which handles merging multiple scopes with different levels of precedence. See the documentation on Configuration Scopes for details on how Spack’s configuration system behaves. The scopes are:

  1. default

  2. system

  3. site

  4. user

And corresponding per-platform scopes. Important functions in this module are:

get_config reads in YAML data for a particular scope and returns it. Callers can then modify the data and write it back with update_config.

When read in, Spack validates configurations with jsonschemas. The schemas are in submodules of spack.schema.

exception spack.config.ConfigError(message, long_message=None)[source]

Bases: SpackError

Superclass for all Spack config related errors.

exception spack.config.ConfigFileError(message, long_message=None)[source]

Bases: ConfigError

Issue reading or accessing a configuration file.

exception spack.config.ConfigFormatError(validation_error, data, filename=None, line=None)[source]

Bases: ConfigError

Raised when a configuration format does not match its schema.

class spack.config.ConfigScope(name, path)[source]

Bases: object

This class represents a configuration scope.

A scope is one directory containing named configuration files. Each file is a config “section” (e.g., mirrors, compilers, etc).

clear()[source]

Empty cached config information.

get_section(section)[source]
get_section_filename(section)[source]
property is_platform_dependent
exception spack.config.ConfigSectionError(message, long_message=None)[source]

Bases: ConfigError

Error for referring to a bad config section name in a configuration.

class spack.config.Configuration(*scopes)[source]

Bases: object

A full Spack configuration, from a hierarchy of config files.

This class makes it easy to add a new scope on top of an existing one.

clear_caches()[source]

Clears the caches for configuration files.

This will cause files to be re-read upon the next request.

property file_scopes

List of writable scopes with an associated file.

get(path, default=None, scope=None)[source]

Get a config section or a single value from one.

Accepts a path syntax that allows us to grab nested config map entries. Getting the ‘config’ section would look like:

spack.config.get('config')

and the dirty section in the config scope would be:

spack.config.get('config:dirty')

We use : as the separator, like YAML objects.
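The colon-separated path lookup behaves roughly like a nested dictionary traversal; the following is a simplified sketch under that assumption (the function name is hypothetical, and the real method also handles scopes and defaults at each level):

```python
def get_path(data, path, default=None):
    """Walk a nested dict using a ':'-separated path, YAML-style."""
    for key in path.split(":"):
        if not isinstance(data, dict) or key not in data:
            return default
        data = data[key]
    return data


cfg = {"config": {"dirty": False, "build_jobs": 2}}
assert get_path(cfg, "config:dirty") is False        # nested value
assert get_path(cfg, "config:missing", default=8) == 8
```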

get_config(section, scope=None)[source]

Get configuration settings for a section.

If scope is None or not provided, return the merged contents of all of Spack’s configuration scopes. If scope is provided, return only the configuration as specified in that scope.

This strips off the top-level name from the YAML section. That is, for a YAML config file that looks like this:

config:
  install_tree:
    root: $spack/opt/spack
  build_stage:
  - $tmpdir/$user/spack-stage

get_config('config') will return:

{ 'install_tree': {
      'root': '$spack/opt/spack',
  },
  'build_stage': ['$tmpdir/$user/spack-stage']
}

get_config_filename(scope, section)[source]

For some scope and section, get the name of the configuration file.

highest_precedence_non_platform_scope()[source]

Non-internal non-platform scope with highest precedence

Platform-specific scopes are of the form scope/platform

highest_precedence_scope()[source]

Non-internal scope with highest precedence.

matching_scopes(reg_expr)[source]

List of all scopes whose names match the provided regular expression.

For example, matching_scopes(r’^command’) will return all scopes whose names begin with command.

pop_scope()[source]

Remove the highest precedence scope and return it.

print_section(section, blame=False)[source]

Print a configuration to stdout.

push_scope(scope)[source]

Add a higher precedence scope to the Configuration.

remove_scope(scope_name)[source]

Remove scope by name; has no effect when scope_name does not exist

set(path, value, scope=None)[source]

Convenience function for setting single values in config files.

Accepts the path syntax described in get().

update_config(section, update_data, scope=None, force=False)[source]

Update the configuration file for a particular scope.

Overwrites contents of a section in a scope with update_data, then writes out the config file.

update_data should have the top-level section name stripped off (it will be re-added). Data itself can be a list, dict, or any other yaml-ish structure.

Configuration scopes that are still written in an old schema format will fail to update unless force is True.

Parameters
  • section (str) – section of the configuration to be updated

  • update_data (dict) – data to be used for the update

  • scope (str) – scope to be updated

  • force (bool) – force the update

class spack.config.ImmutableConfigScope(name, path)[source]

Bases: ConfigScope

A configuration scope that cannot be written to.

This is used for ConfigScopes passed on the command line.

class spack.config.InternalConfigScope(name, data=None)[source]

Bases: ConfigScope

An internal configuration scope that is not persisted to a file.

This is for spack internal use so that command-line options and config file settings are accessed the same way, and Spack can easily override settings from files.

clear()[source]

Empty cached config information.

get_section(section)[source]

Just reads from an internal dictionary.

get_section_filename(section)[source]
class spack.config.SingleFileScope(name, path, schema, yaml_path=None)[source]

Bases: ConfigScope

This class represents a configuration scope in a single YAML file.

get_section(section)[source]
get_section_filename(section)[source]
property is_platform_dependent
spack.config.add(fullpath, scope=None)[source]

Add the given configuration to the specified config scope. Add accepts a path. If you want to add from a filename, use add_from_file.

spack.config.add_default_platform_scope(platform)[source]
spack.config.add_from_file(filename, scope=None)[source]

Add updates to a config from a filename

spack.config.collect_urls(base_url)[source]

Return a list of configuration URLs.

Parameters

base_url (str) – URL for a configuration (yaml) file or a directory containing yaml file(s)

Returns: (list) list of configuration file(s) or empty list if none

spack.config.command_line_scopes = []

Configuration scopes added on the command line, set by spack.main.main().

spack.config.config = <spack.config.Configuration object>

This is the singleton configuration instance for Spack.

spack.config.config_defaults = {'config': {'build_jobs': 2, 'build_stage': '$tempdir/spack-stage', 'checksum': True, 'concretizer': 'clingo', 'connect_timeout': 10, 'debug': False, 'dirty': False, 'license_dir': '/home/docs/checkouts/readthedocs.org/user_builds/spack/checkouts/v0.19.1/lib/spack/docs/_spack_root/etc/spack/licenses', 'verify_ssl': True}}

Hard-coded default values for some key configuration options. This ensures that Spack will still work even if config.yaml in the defaults scope is removed.

spack.config.configuration_defaults_path = ('defaults', '/home/docs/checkouts/readthedocs.org/user_builds/spack/checkouts/v0.19.1/lib/spack/docs/_spack_root/etc/spack/defaults')

Path to the default configuration

spack.config.default_list_scope()[source]

Return the config scope that is listed by default.

Commands that list configuration list all scopes (merged) by default.

spack.config.default_modify_scope(section='config')[source]

Return the config scope that commands should modify by default.

Commands that modify configuration by default modify the highest priority scope.

Parameters

section (str) – Section for which to get the default scope. If this is not ‘compilers’, a general (non-platform) scope is used.

spack.config.ensure_latest_format_fn(section)[source]

Return a function that takes as input a dictionary read from a configuration file and updates it to the latest format.

The function returns True if there was any update, False otherwise.

Parameters

section (str) – section of the configuration e.g. “packages”, “config”, etc.

spack.config.fetch_remote_configs(url, dest_dir, skip_existing=True)[source]

Retrieve configuration file(s) at the specified URL.

Parameters
  • url (str) – URL for a configuration (yaml) file or a directory containing yaml file(s)

  • dest_dir (str) – destination directory

  • skip_existing (bool) – Skip files that already exist in dest_dir if True; otherwise, replace those files

Returns: (str) path to the corresponding file if the URL is or contains a single file and it is the only file in the destination directory, or the root (dest_dir) directory if multiple configuration files exist or are retrieved.

spack.config.first_existing(dictionary, keys)[source]

Get the value of the first key in keys that is in the dictionary.

spack.config.get(path, default=None, scope=None)[source]

Module-level wrapper for Configuration.get().

spack.config.get_valid_type(path)[source]

Returns an instance of a type that will pass validation for path.

The instance is created by calling the constructor with no arguments. If multiple types will satisfy validation for data at the configuration path given, the priority order is list, dict, str, bool, int, float.

spack.config.merge_yaml(dest, source)[source]

Merges source into dest; entries in source take precedence over dest.

This routine may modify dest and should be assigned to dest, in case dest was None to begin with, e.g.:

dest = merge_yaml(dest, source)

In the result, elements from lists from source will appear before elements of lists from dest. Likewise, when iterating over keys or items in merged OrderedDict objects, keys from source will appear before keys from dest.

Config file authors can optionally end any attribute in a dict with :: instead of :, and the key will override that of the parent instead of merging.
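The merge semantics described above can be sketched for plain dicts and lists. This is a simplification of the real routine, assuming only the behaviors documented here: source precedence, source list elements first, source keys iterated first, and the ``::`` override marker:

```python
def merge(dest, source):
    """Merge source into dest; source takes precedence (simplified sketch)."""
    if dest is None:
        return source
    if isinstance(dest, dict) and isinstance(source, dict):
        merged = {}
        for key, value in source.items():       # source keys appear first
            if key.endswith("::"):              # '::' overrides instead of merging
                merged[key.rstrip(":")] = value
            else:
                merged[key] = merge(dest.get(key), value)
        for key, value in dest.items():         # keys only present in dest
            merged.setdefault(key, value)
        return merged
    if isinstance(dest, list) and isinstance(source, list):
        return source + dest                    # source elements come first
    return source                               # scalars: source wins


dest = {"mirrors": ["local"], "config": {"debug": True}}
src = {"mirrors": ["remote"], "config": {"debug": False}}
merge(dest, src)
# → {'mirrors': ['remote', 'local'], 'config': {'debug': False}}
```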

spack.config.override(path_or_scope, value=None)[source]

Simple way to override config settings within a context.

Parameters
  • path_or_scope (ConfigScope or str) – scope or single option to override

  • value (object or None) – value for the single option

Temporarily push a scope on the current configuration, then remove it after the context completes. If a single option is provided, create an internal config scope for it and push/pop that scope.

spack.config.overrides_base_name = 'overrides-'

Base name for the (internal) overrides scope.

spack.config.process_config_path(path)[source]
spack.config.raw_github_gitlab_url(url)[source]

Transform a github URL to the raw form to avoid undesirable html.

Parameters

url – url to be converted to raw form

Returns: (str) raw github/gitlab url or the original url

spack.config.read_config_file(filename, schema=None)[source]

Read a YAML configuration file.

User can provide a schema for validation. If no schema is provided, we will infer the schema from the top-level key.

spack.config.scopes()[source]

Convenience function to get list of configuration scopes.

spack.config.scopes_metavar = '{defaults,system,site,user}[/PLATFORM] or env:ENVIRONMENT'

Metavar to use for commands that accept scopes; this is shorter and more readable than listing all choices.

spack.config.section_schemas = {'bootstrap': {'$schema': 'http://json-schema.org/draft-07/schema#', 'additionalProperties': False, 'properties': {'bootstrap': {'properties': {'enable': {'type': 'boolean'}, 'root': {'type': 'string'}, 'sources': {'items': {'additionalProperties': False, 'properties': {'metadata': {'type': 'string'}, 'name': {'type': 'string'}}, 'required': ['name', 'metadata'], 'type': 'object'}, 'type': 'array'}, 'trusted': {'patternProperties': {'\\w[\\w-]*': {'type': 'boolean'}}, 'type': 'object'}}, 'type': 'object'}}, 'title': 'Spack bootstrap configuration file schema', 'type': 'object'}, 'compilers': {'$schema': 'http://json-schema.org/draft-07/schema#', 'additionalProperties': False, 'properties': {'compilers': {'items': [{'type': 'object', 'additionalProperties': False, 'properties': {'compiler': {'type': 'object', 'additionalProperties': False, 'required': ['paths', 'spec', 'modules', 'operating_system'], 'properties': {'paths': {'type': 'object', 'required': ['cc', 'cxx', 'f77', 'fc'], 'additionalProperties': False, 'properties': {'cc': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'cxx': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'f77': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'fc': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}}}, 'flags': {'type': 'object', 'additionalProperties': False, 'properties': {'cflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'cxxflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'fflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'cppflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'ldflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'ldlibs': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}}}, 'spec': {'type': 'string'}, 'operating_system': {'type': 'string'}, 'target': {'type': 'string'}, 'alias': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'modules': {'anyOf': [{'type': 'string'}, {'type': 'null'}, {'type': 'array'}]}, 
'implicit_rpaths': {'anyOf': [{'type': 'array', 'items': {'type': 'string'}}, {'type': 'boolean'}]}, 'environment': {'type': 'object', 'default': {}, 'additionalProperties': False, 'properties': {'set': {'type': 'object', 'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}}, 'unset': {'type': 'array', 'default': [], 'items': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'prepend_path': {'type': 'object', 'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}}, 'append_path': {'type': 'object', 'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}}, 'remove_path': {'type': 'object', 'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}}}}, 'extra_rpaths': {'type': 'array', 'default': [], 'items': {'type': 'string'}}}}}}], 'type': 'array'}}, 'title': 'Spack compiler configuration file schema', 'type': 'object'}, 'concretizer': {'$schema': 'http://json-schema.org/draft-07/schema#', 'additionalProperties': False, 'properties': {'concretizer': {'additionalProperties': False, 'properties': {'reuse': {'type': 'boolean'}, 'targets': {'properties': {'granularity': {'enum': ['generic', 'microarchitectures'], 'type': 'string'}, 'host_compatible': {'type': 'boolean'}}, 'type': 'object'}, 'unify': {'oneOf': [{'type': 'boolean'}, {'type': 'string', 'enum': ['when_possible']}]}}, 'type': 'object'}}, 'title': 'Spack concretizer configuration file schema', 'type': 'object'}, 'config': {'$schema': 'http://json-schema.org/draft-07/schema#', 'additionalProperties': False, 'properties': {'config': {'default': {}, 'deprecatedProperties': {'error': False, 'message': 'config:module_roots has been replaced by modules:[module set]:roots and is ignored', 'properties': ['module_roots']}, 'properties': {'additional_external_search_paths': {'items': {'type': 'string'}, 'type': 'array'}, 'allow_sgid': {'type': 'boolean'}, 'binary_index_root': 
{'type': 'string'}, 'binary_index_ttl': {'minimum': 0, 'type': 'integer'}, 'build_jobs': {'minimum': 1, 'type': 'integer'}, 'build_language': {'type': 'string'}, 'build_stage': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}, 'ccache': {'type': 'boolean'}, 'checksum': {'type': 'boolean'}, 'concretizer': {'enum': ['original', 'clingo'], 'type': 'string'}, 'connect_timeout': {'minimum': 0, 'type': 'integer'}, 'db_lock_timeout': {'minimum': 1, 'type': 'integer'}, 'debug': {'type': 'boolean'}, 'deprecated': {'type': 'boolean'}, 'dirty': {'type': 'boolean'}, 'extensions': {'items': {'type': 'string'}, 'type': 'array'}, 'install_hash_length': {'minimum': 1, 'type': 'integer'}, 'install_missing_compilers': {'type': 'boolean'}, 'install_path_scheme': {'type': 'string'}, 'install_tree': {'anyOf': [{'type': 'object', 'properties': {'root': {'type': 'string'}, 'padded_length': {'oneOf': [{'type': 'integer', 'minimum': 0}, {'type': 'boolean'}]}, 'projections': {'type': 'object', 'patternProperties': {'all|\\w[\\w-]*': {'type': 'string'}}}}}, {'type': 'string'}]}, 'license_dir': {'type': 'string'}, 'locks': {'type': 'boolean'}, 'misc_cache': {'type': 'string'}, 'package_lock_timeout': {'anyOf': [{'type': 'integer', 'minimum': 1}, {'type': 'null'}]}, 'shared_linking': {'anyOf': [{'type': 'string', 'enum': ['rpath', 'runpath']}, {'type': 'object', 'properties': {'type': {'type': 'string', 'enum': ['rpath', 'runpath']}, 'bind': {'type': 'boolean'}}}]}, 'source_cache': {'type': 'string'}, 'suppress_gpg_warnings': {'type': 'boolean'}, 'template_dirs': {'items': {'type': 'string'}, 'type': 'array'}, 'test_stage': {'type': 'string'}, 'url_fetch_method': {'enum': ['urllib', 'curl'], 'type': 'string'}, 'verify_ssl': {'type': 'boolean'}}, 'type': 'object'}}, 'title': 'Spack core configuration file schema', 'type': 'object'}, 'mirrors': {'$schema': 'http://json-schema.org/draft-07/schema#', 'additionalProperties': False, 'properties': {'mirrors': 
{'additionalProperties': False, 'default': {}, 'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'object', 'required': ['fetch', 'push'], 'properties': {'fetch': {'type': ['string', 'object']}, 'push': {'type': ['string', 'object']}}}]}}, 'type': 'object'}}, 'title': 'Spack mirror configuration file schema', 'type': 'object'}, 'modules': {'$schema': 'http://json-schema.org/draft-07/schema#', 'additionalProperties': False, 'properties': {'modules': {'additionalProperties': False, 'patternProperties': {'^(?!prefix_inspections$)\\w[\\w-]*$': {'additionalProperties': False, 'default': {}, 'properties': {'arch_folder': {'type': 'boolean'}, 'enable': {'default': [], 'items': {'enum': ['tcl', 'lmod'], 'type': 'string'}, 'type': 'array'}, 'lmod': {'allOf': [{'type': 'object', 'default': {}, 'allOf': [{'properties': {'verbose': {'type': 'boolean', 'default': False}, 'hash_length': {'type': 'integer', 'minimum': 0, 'default': 7}, 'whitelist': {'type': 'array', 'default': [], 'items': {'type': 'string'}}, 'blacklist': {'type': 'array', 'default': [], 'items': {'type': 'string'}}, 'blacklist_implicits': {'type': 'boolean', 'default': False}, 'include': {'type': 'array', 'default': [], 'items': {'type': 'string'}}, 'exclude': {'type': 'array', 'default': [], 'items': {'type': 'string'}}, 'exclude_implicits': {'type': 'boolean', 'default': False}, 'defaults': {'type': 'array', 'default': [], 'items': {'type': 'string'}}, 'naming_scheme': {'type': 'string'}, 'projections': {'type': 'object', 'patternProperties': {'all|\\w[\\w-]*': {'type': 'string'}}}, 'all': {'type': 'object', 'default': {}, 'additionalProperties': False, 'properties': {'filter': {'type': 'object', 'default': {}, 'additionalProperties': False, 'properties': {'environment_blacklist': {'type': 'array', 'default': [], 'items': {'type': 'string'}}, 'exclude_env_vars': {'type': 'array', 'default': [], 'items': {'type': 'string'}}}}, 'template': {'type': 'string'}, 'autoload': {'type': 
'string', 'enum': ['none', 'direct', 'all']}, 'prerequisites': {'type': 'string', 'enum': ['none', 'direct', 'all']}, 'conflict': {'type': 'array', 'default': [], 'items': {'type': 'string'}}, 'load': {'type': 'array', 'default': [], 'items': {'type': 'string'}}, 'suffixes': {'type': 'object', 'validate_spec': True, 'patternProperties': {'\\w[\\w-]*': {'type': 'string'}}}, 'environment': {'type': 'object', 'default': {}, 'additionalProperties': False, 'properties': {'set': {'type': 'object', 'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}}, 'unset': {'type': 'array', 'default': [], 'items': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'prepend_path': {'type': 'object', 'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}}, 'append_path': {'type': 'object', 'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}}, 'remove_path': {'type': 'object', 'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}}}}}}}}, {'validate_spec': True, 'patternProperties': {'(?!hierarchy|core_specs|verbose|hash_length|defaults|whitelist|blacklist|include|exclude|projections|naming_scheme|core_compilers|all)(^\\w[\\w-]*)': {'type': 'object', 'default': {}, 'additionalProperties': False, 'properties': {'filter': {'type': 'object', 'default': {}, 'additionalProperties': False, 'properties': {'environment_blacklist': {'type': 'array', 'default': [], 'items': {'type': 'string'}}, 'exclude_env_vars': {'type': 'array', 'default': [], 'items': {'type': 'string'}}}}, 'template': {'type': 'string'}, 'autoload': {'type': 'string', 'enum': ['none', 'direct', 'all']}, 'prerequisites': {'type': 'string', 'enum': ['none', 'direct', 'all']}, 'conflict': {'type': 'array', 'default': [], 'items': {'type': 'string'}}, 'load': {'type': 'array', 'default': [], 'items': {'type': 'string'}}, 'suffixes': {'type': 'object', 'validate_spec': True, 
'patternProperties': {'\\w[\\w-]*': {'type': 'string'}}}, 'environment': {'type': 'object', 'default': {}, 'additionalProperties': False, 'properties': {'set': {'type': 'object', 'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}}, 'unset': {'type': 'array', 'default': [], 'items': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'prepend_path': {'type': 'object', 'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}}, 'append_path': {'type': 'object', 'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}}, 'remove_path': {'type': 'object', 'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}}}}}}, '^[\\^@%+~]': {'type': 'object', 'default': {}, 'additionalProperties': False, 'properties': {'filter': {'type': 'object', 'default': {}, 'additionalProperties': False, 'properties': {'environment_blacklist': {'type': 'array', 'default': [], 'items': {'type': 'string'}}, 'exclude_env_vars': {'type': 'array', 'default': [], 'items': {'type': 'string'}}}}, 'template': {'type': 'string'}, 'autoload': {'type': 'string', 'enum': ['none', 'direct', 'all']}, 'prerequisites': {'type': 'string', 'enum': ['none', 'direct', 'all']}, 'conflict': {'type': 'array', 'default': [], 'items': {'type': 'string'}}, 'load': {'type': 'array', 'default': [], 'items': {'type': 'string'}}, 'suffixes': {'type': 'object', 'validate_spec': True, 'patternProperties': {'\\w[\\w-]*': {'type': 'string'}}}, 'environment': {'type': 'object', 'default': {}, 'additionalProperties': False, 'properties': {'set': {'type': 'object', 'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}}, 'unset': {'type': 'array', 'default': [], 'items': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'prepend_path': {'type': 'object', 'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}}, 'append_path': 
{'type': 'object', 'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}}, 'remove_path': {'type': 'object', 'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}}}}}}}}]}, {'type': 'object', 'properties': {'core_compilers': {'type': 'array', 'default': [], 'items': {'type': 'string'}}, 'hierarchy': {'type': 'array', 'default': [], 'items': {'type': 'string'}}, 'core_specs': {'type': 'array', 'default': [], 'items': {'type': 'string'}}}}]}, 'prefix_inspections': {'additionalProperties': False, 'patternProperties': {'^[\\w-]*': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}}, 'type': 'object'}, 'roots': {'properties': {'lmod': {'type': 'string'}, 'tcl': {'type': 'string'}}, 'type': 'object'}, 'tcl': {'allOf': [{'type': 'object', 'default': {}, 'allOf': [{'properties': {'verbose': {'type': 'boolean', 'default': False}, 'hash_length': {'type': 'integer', 'minimum': 0, 'default': 7}, 'whitelist': {'type': 'array', 'default': [], 'items': {'type': 'string'}}, 'blacklist': {'type': 'array', 'default': [], 'items': {'type': 'string'}}, 'blacklist_implicits': {'type': 'boolean', 'default': False}, 'include': {'type': 'array', 'default': [], 'items': {'type': 'string'}}, 'exclude': {'type': 'array', 'default': [], 'items': {'type': 'string'}}, 'exclude_implicits': {'type': 'boolean', 'default': False}, 'defaults': {'type': 'array', 'default': [], 'items': {'type': 'string'}}, 'naming_scheme': {'type': 'string'}, 'projections': {'type': 'object', 'patternProperties': {'all|\\w[\\w-]*': {'type': 'string'}}}, 'all': {'type': 'object', 'default': {}, 'additionalProperties': False, 'properties': {'filter': {'type': 'object', 'default': {}, 'additionalProperties': False, 'properties': {'environment_blacklist': {'type': 'array', 'default': [], 'items': {'type': 'string'}}, 'exclude_env_vars': {'type': 'array', 'default': [], 'items': {'type': 'string'}}}}, 'template': {'type': 'string'}, 
'autoload': {'type': 'string', 'enum': ['none', 'direct', 'all']}, 'prerequisites': {'type': 'string', 'enum': ['none', 'direct', 'all']}, 'conflict': {'type': 'array', 'default': [], 'items': {'type': 'string'}}, 'load': {'type': 'array', 'default': [], 'items': {'type': 'string'}}, 'suffixes': {'type': 'object', 'validate_spec': True, 'patternProperties': {'\\w[\\w-]*': {'type': 'string'}}}, 'environment': {'type': 'object', 'default': {}, 'additionalProperties': False, 'properties': {'set': {'type': 'object', 'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}}, 'unset': {'type': 'array', 'default': [], 'items': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'prepend_path': {'type': 'object', 'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}}, 'append_path': {'type': 'object', 'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}}, 'remove_path': {'type': 'object', 'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}}}}}}}}, {'validate_spec': True, 'patternProperties': {'(?!hierarchy|core_specs|verbose|hash_length|defaults|whitelist|blacklist|include|exclude|projections|naming_scheme|core_compilers|all)(^\\w[\\w-]*)': {'type': 'object', 'default': {}, 'additionalProperties': False, 'properties': {'filter': {'type': 'object', 'default': {}, 'additionalProperties': False, 'properties': {'environment_blacklist': {'type': 'array', 'default': [], 'items': {'type': 'string'}}, 'exclude_env_vars': {'type': 'array', 'default': [], 'items': {'type': 'string'}}}}, 'template': {'type': 'string'}, 'autoload': {'type': 'string', 'enum': ['none', 'direct', 'all']}, 'prerequisites': {'type': 'string', 'enum': ['none', 'direct', 'all']}, 'conflict': {'type': 'array', 'default': [], 'items': {'type': 'string'}}, 'load': {'type': 'array', 'default': [], 'items': {'type': 'string'}}, 'suffixes': {'type': 'object', 
'validate_spec': True, 'patternProperties': {'\\w[\\w-]*': {'type': 'string'}}}, 'environment': {'type': 'object', 'default': {}, 'additionalProperties': False, 'properties': {'set': {'type': 'object', 'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}}, 'unset': {'type': 'array', 'default': [], 'items': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'prepend_path': {'type': 'object', 'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}}, 'append_path': {'type': 'object', 'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}}, 'remove_path': {'type': 'object', 'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}}}}}}, '^[\\^@%+~]': {'type': 'object', 'default': {}, 'additionalProperties': False, 'properties': {'filter': {'type': 'object', 'default': {}, 'additionalProperties': False, 'properties': {'environment_blacklist': {'type': 'array', 'default': [], 'items': {'type': 'string'}}, 'exclude_env_vars': {'type': 'array', 'default': [], 'items': {'type': 'string'}}}}, 'template': {'type': 'string'}, 'autoload': {'type': 'string', 'enum': ['none', 'direct', 'all']}, 'prerequisites': {'type': 'string', 'enum': ['none', 'direct', 'all']}, 'conflict': {'type': 'array', 'default': [], 'items': {'type': 'string'}}, 'load': {'type': 'array', 'default': [], 'items': {'type': 'string'}}, 'suffixes': {'type': 'object', 'validate_spec': True, 'patternProperties': {'\\w[\\w-]*': {'type': 'string'}}}, 'environment': {'type': 'object', 'default': {}, 'additionalProperties': False, 'properties': {'set': {'type': 'object', 'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}}, 'unset': {'type': 'array', 'default': [], 'items': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'prepend_path': {'type': 'object', 'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 
'number'}]}}}, 'append_path': {'type': 'object', 'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}}, 'remove_path': {'type': 'object', 'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}}}}}}}}]}, {}]}, 'use_view': {'anyOf': [{'type': 'string'}, {'type': 'boolean'}]}}, 'type': 'object'}}, 'properties': {'prefix_inspections': {'additionalProperties': False, 'patternProperties': {'^[\\w-]*': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}}, 'type': 'object'}}, 'type': 'object'}}, 'title': 'Spack module file configuration file schema', 'type': 'object'}, 'packages': {'$schema': 'http://json-schema.org/draft-07/schema#', 'additionalProperties': False, 'properties': {'packages': {'additionalProperties': False, 'default': {}, 'patternProperties': {'\\w[\\w-]*': {'additionalProperties': False, 'default': {}, 'properties': {'buildable': {'default': True, 'type': 'boolean'}, 'compiler': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'externals': {'items': {'additionalProperties': True, 'properties': {'extra_attributes': {'type': 'object'}, 'modules': {'items': {'type': 'string'}, 'type': 'array'}, 'prefix': {'type': 'string'}, 'spec': {'type': 'string'}}, 'required': ['spec'], 'type': 'object'}, 'type': 'array'}, 'package_attributes': {'additionalProperties': False, 'patternProperties': {'\\w+': {}}, 'type': 'object'}, 'permissions': {'additionalProperties': False, 'properties': {'group': {'type': 'string'}, 'read': {'enum': ['user', 'group', 'world'], 'type': 'string'}, 'write': {'enum': ['user', 'group', 'world'], 'type': 'string'}}, 'type': 'object'}, 'providers': {'additionalProperties': False, 'default': {}, 'patternProperties': {'\\w[\\w-]*': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}}, 'type': 'object'}, 'require': {'oneOf': [{'type': 'array', 'items': {'type': 'object', 'properties': {'one_of': {'type': 'array'}, 'any_of': {'type': 
'array'}}}}, {'type': 'string'}]}, 'target': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'variants': {'oneOf': [{'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}]}, 'version': {'default': [], 'items': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}, 'type': 'array'}}, 'type': 'object'}}, 'type': 'object'}}, 'title': 'Spack package configuration file schema', 'type': 'object'}, 'repos': {'$schema': 'http://json-schema.org/draft-07/schema#', 'additionalProperties': False, 'properties': {'repos': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}}, 'title': 'Spack repository configuration file schema', 'type': 'object'}, 'upstreams': {'$schema': 'http://json-schema.org/draft-07/schema#', 'additionalProperties': False, 'properties': {'upstreams': {'default': {}, 'patternProperties': {'\\w[\\w-]*': {'additionalProperties': False, 'default': {}, 'properties': {'install_tree': {'type': 'string'}, 'modules': {'properties': {'lmod': {'type': 'string'}, 'tcl': {'type': 'string'}}, 'type': 'object'}}, 'type': 'object'}}, 'type': 'object'}}, 'title': 'Spack core configuration file schema', 'type': 'object'}}

Dict from section names -> schema for that section

spack.config.set(path, value, scope=None)[source]

Convenience function for setting single values in config files.

Accepts the path syntax described in get().
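The path is a colon-separated string naming a config section and the nested keys beneath it. A minimal sketch of how such a path might be split (parse_config_path here is a hypothetical helper for illustration, not part of Spack's API):

```python
def parse_config_path(path):
    """Hypothetical helper: split "config:install_tree:root" into the
    section name and the list of nested keys leading to the value."""
    parts = path.split(":")
    return parts[0], parts[1:]

section, keys = parse_config_path("config:install_tree:root")
```

The section name selects which schema applies, and the remaining keys index into the YAML data for that section.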

spack.config.use_configuration(*scopes_or_paths)[source]

Use the configuration scopes passed as arguments within the context manager.

Parameters

*scopes_or_paths – scope objects or paths to be used

Returns

Configuration object associated with the scopes passed as arguments

spack.config.validate(data, schema, filename=None)[source]

Validate data read in from a Spack YAML file.

Parameters
  • data (dict or list) – data read from a Spack YAML file

  • schema (dict or list) – jsonschema to validate data

This leverages the line information (start_mark, end_mark) stored on Spack YAML structures.

spack.cray_manifest module

exception spack.cray_manifest.ManifestValidationError(msg, long_msg=None)[source]

Bases: SpackError

spack.cray_manifest.compiler_from_entry(entry)[source]
spack.cray_manifest.default_path = '/opt/cray/pe/cpe-descriptive-manifest/'

Cray systems can store a Spack-compatible description of system packages here.

spack.cray_manifest.entries_to_specs(entries)[source]
spack.cray_manifest.read(path, apply_updates)[source]
spack.cray_manifest.spec_from_entry(entry)[source]
spack.cray_manifest.translated_compiler_name(manifest_compiler_name)[source]

When creating a Compiler object, Spack expects a name matching one of the classes in spack.compilers. Names in the Cray manifest may differ; for cases where we know the name refers to a compiler in Spack, this function translates it automatically.

This function will raise an error if there is no recorded translation and the name doesn’t match a known compiler name.

spack.database module

Spack’s installation tracking database.

The database serves two purposes:

  1. It implements a cache on top of a potentially very large Spack directory hierarchy, speeding up many operations that would otherwise require filesystem access.

  2. It will allow us to track external installations as well as lost packages and their dependencies.

Prior to the implementation of this store, a directory layout served as the authoritative database of packages in Spack. This module provides a cache and a sanity checking mechanism for what is in the filesystem.

exception spack.database.CorruptDatabaseError(message, long_message=None)[source]

Bases: SpackError

Raised when errors are found while reading the database.

class spack.database.Database(root, db_dir=None, upstream_dbs=None, is_upstream=False, enable_transaction_locking=True, record_fields=['spec', 'ref_count', 'path', 'installed', 'explicit', 'installation_time', 'deprecated_for'])[source]

Bases: object

Per-process lock objects for each install prefix.

add(spec_like, *args, **kwargs)[source]
clear_all_failures()[source]

Force remove install failure tracking files.

clear_failure(spec, force=False)[source]

Remove any persistent and cached failure tracking for the spec.

see mark_failed().

Parameters
  • spec (spack.spec.Spec) – the spec whose failure indicators are being removed

  • force (bool) – True if the failure information should be cleared when a prefix failure lock exists for the file or False if the failure should not be cleared (e.g., it may be associated with a concurrent build)

db_for_spec_hash(hash_key)[source]
deprecate(spec_like, *args, **kwargs)[source]
deprecator(spec)[source]

Return the spec that the given spec is deprecated for, or None

get_by_hash(dag_hash, default=None, installed=<built-in function any>)[source]

Look up a spec by DAG hash, or by a DAG hash prefix.

Parameters
  • dag_hash (str) – hash (or hash prefix) to look up

  • default (object or None) – default value to return if dag_hash is not in the DB (default: None)

  • installed (bool or InstallStatus or Iterable or None) – if True, includes only installed specs in the search; if False only missing specs, and if any, all specs in database. If an InstallStatus or iterable of InstallStatus, returns specs whose install status (installed, deprecated, or missing) matches (one of) the InstallStatus. (default: any)

installed defaults to any so that we can refer to any known hash. Note that query() and query_one() differ in that they only return installed specs by default.

Returns

a list of specs matching the hash or hash prefix

Return type

(list)
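Using the builtin any as the default is a sentinel pattern: since any can never be a real install status, it unambiguously means "no filter". A simplified sketch of the idea (not Spack's actual implementation):

```python
def filter_by_installed(records, installed=any):
    """Keep records whose `installed` flag matches the filter; the
    builtin `any` acts as a sentinel meaning "match everything"."""
    if installed is any:
        return list(records)
    return [r for r in records if r["installed"] == installed]

db = [{"hash": "abc", "installed": True}, {"hash": "def", "installed": False}]
```

This is why get_by_hash can refer to any known hash, while query defaults to installed=True and only returns installed specs.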

get_by_hash_local(*args, **kwargs)[source]

Look up a spec in this DB by DAG hash, or by a DAG hash prefix.

Parameters
  • dag_hash (str) – hash (or hash prefix) to look up

  • default (object or None) – default value to return if dag_hash is not in the DB (default: None)

  • installed (bool or InstallStatus or Iterable or None) – if True, includes only installed specs in the search; if False only missing specs, and if any, all specs in database. If an InstallStatus or iterable of InstallStatus, returns specs whose install status (installed, deprecated, or missing) matches (one of) the InstallStatus. (default: any)

installed defaults to any so that we can refer to any known hash. Note that query() and query_one() differ in that they only return installed specs by default.

Returns

a list of specs matching the hash or hash prefix

Return type

(list)

get_record(spec_like, *args, **kwargs)[source]
installed_extensions_for(spec_like, *args, **kwargs)[source]
installed_relatives(spec_like, *args, **kwargs)[source]
is_occupied_install_prefix(path)[source]
mark(spec_like, *args, **kwargs)[source]
mark_failed(spec)[source]

Mark a spec as failing to install.

Prefix failure marking takes two forms: a byte range lock on the nth byte of a file, used to coordinate between concurrent parallel build processes, and a persistent file named with the full hash and containing the spec, stored in a subdirectory of the database so the failure persists across overlapping but separate related build processes.

The failure lock file, spack.store.db.prefix_failures, lives alongside the install DB. n is the sys.maxsize-bit prefix of the associated DAG hash to make the likelihood of collision very low with no cleanup required.

missing(spec)[source]
prefix_failed(spec)[source]

Return True if the prefix (installation) is marked as failed.

prefix_failure_locked(spec)[source]

Return True if a process has a failure lock on the spec.

prefix_failure_marked(spec)[source]

Determine if the spec has a persistent failure marking.

prefix_lock(spec, timeout=None)[source]

Get a lock on a particular spec’s installation directory.

NOTE: The installation directory does not need to exist.

Prefix lock is a byte range lock on the nth byte of a file.

The lock file is spack.store.db.prefix_lock – the DB tells us what to call it and it lives alongside the install DB.

n is the sys.maxsize-bit prefix of the DAG hash. This makes the likelihood of collision very low AND it gives us readers-writer lock semantics with just a single lockfile, so no cleanup is required.
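A sketch of how the per-spec byte offset might be derived from a DAG hash, under the assumption that the hex hash is reduced modulo sys.maxsize (the real implementation may differ in detail):

```python
import sys

def lock_byte(dag_hash):
    # Reduce the (hex) DAG hash to a byte offset that fits in sys.maxsize,
    # so each spec locks its own byte of the shared lock file.
    return int(dag_hash, 16) % sys.maxsize
```

Because the mapping is deterministic, every process computes the same offset for the same spec, which is what makes a single shared lock file sufficient.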

prefix_read_lock(spec)[source]
prefix_write_lock(spec)[source]
query(*args, **kwargs)[source]

Query the Spack database including all upstream databases.

Parameters
  • query_spec – queries iterate through specs in the database and return those that satisfy the supplied query_spec. If query_spec is any, this will match all specs in the database. If it is a spec, we’ll evaluate spec.satisfies(query_spec)

  • known (bool or None) – Specs that are “known” are those for which Spack can locate a package.py file – i.e., Spack “knows” how to install them. Specs that are unknown may represent packages that existed in a previous version of Spack, but have since either changed their name or been removed

  • installed (bool or InstallStatus or Iterable or None) – if True, includes only installed specs in the search; if False only missing specs, and if any, all specs in database. If an InstallStatus or iterable of InstallStatus, returns specs whose install status (installed, deprecated, or missing) matches (one of) the InstallStatus. (default: True)

  • explicit (bool or None) – A spec that was installed following a specific user request is marked as explicit. If instead it was pulled-in as a dependency of a user requested spec it’s considered implicit.

  • start_date (datetime.datetime or None) – filters the query discarding specs that have been installed before start_date.

  • end_date (datetime.datetime or None) – filters the query discarding specs that have been installed after end_date.

  • hashes (Container) – list or set of hashes that we can use to restrict the search

  • in_buildcache (bool or None) – Specs that are marked in this database as part of an associated binary cache are in_buildcache. All other specs are not. This field is used for querying mirror indices. Default is any.

Returns

list of specs that match the query
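The date filters compose as a simple range check on each record's installation time; a self-contained sketch of that check (an illustration, not Spack's code):

```python
from datetime import datetime

def in_date_range(installation_time, start_date=None, end_date=None):
    # Discard specs installed before start_date or after end_date;
    # a missing bound means "unbounded" on that side.
    if start_date is not None and installation_time < start_date:
        return False
    if end_date is not None and installation_time > end_date:
        return False
    return True
```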

query_by_spec_hash(hash_key, data=None)[source]

Get a spec for hash, and whether it’s installed upstream.

Returns

(bool, optional InstallRecord): the bool tells us whether the spec is installed upstream. The InstallRecord is also returned if it’s installed at all; otherwise None.

Return type

(tuple)

query_local(*args, **kwargs)[source]

Query only the local Spack database.

For performance reasons, this function doesn’t guarantee any sorting of the returned data, since comparing specs with __lt__ may be an expensive operation.

Parameters
  • query_spec – queries iterate through specs in the database and return those that satisfy the supplied query_spec. If query_spec is any, this will match all specs in the database. If it is a spec, we’ll evaluate spec.satisfies(query_spec)

  • known (bool or None) – Specs that are “known” are those for which Spack can locate a package.py file – i.e., Spack “knows” how to install them. Specs that are unknown may represent packages that existed in a previous version of Spack, but have since either changed their name or been removed

  • installed (bool or InstallStatus or Iterable or None) – if True, includes only installed specs in the search; if False only missing specs, and if any, all specs in database. If an InstallStatus or iterable of InstallStatus, returns specs whose install status (installed, deprecated, or missing) matches (one of) the InstallStatus. (default: True)

  • explicit (bool or None) – A spec that was installed following a specific user request is marked as explicit. If instead it was pulled-in as a dependency of a user requested spec it’s considered implicit.

  • start_date (datetime.datetime or None) – filters the query discarding specs that have been installed before start_date.

  • end_date (datetime.datetime or None) – filters the query discarding specs that have been installed after end_date.

  • hashes (Container) – list or set of hashes that we can use to restrict the search

  • in_buildcache (bool or None) – Specs that are marked in this database as part of an associated binary cache are in_buildcache. All other specs are not. This field is used for querying mirror indices. Default is any.

Returns

list of specs that match the query

query_local_by_spec_hash(hash_key)[source]

Get a spec by hash in the local database

Returns

InstallRecord when installed locally, otherwise None.

Return type

(InstallRecord or None)

query_one(query_spec, known=<built-in function any>, installed=True)[source]

Query for exactly one spec that matches the query spec.

Raises an assertion error if more than one spec matches the query. Returns None if no installed package matches.

read_transaction()[source]

Get a read lock context manager for use in a with block.

reindex(directory_layout)[source]

Build database index from scratch based on a directory layout.

Locks the DB if it isn’t locked already.

remove(spec_like, *args, **kwargs)[source]
specs_deprecated_by(spec)[source]

Return all specs deprecated in favor of the given spec

property unused_specs

Return all the specs that are currently installed but not needed at runtime to satisfy user’s requests.

Specs in the return list are those which are not either:
  1. Installed on an explicit user request

  2. Installed as a “run” or “link” dependency (even transitive) of a spec at point 1.
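In other words, a spec is unused when it is not reachable from any explicitly installed spec by following run or link edges. A self-contained sketch of that traversal, with plain dicts standing in for the database and the dependency DAG:

```python
def unused_specs(explicit, deps):
    """explicit: {name: explicit_bool}; deps: {name: [(child, deptype), ...]}.
    Return the set of specs unreachable from explicit roots via
    run/link edges (even transitively)."""
    needed, stack = set(), [n for n, exp in explicit.items() if exp]
    while stack:
        name = stack.pop()
        if name in needed:
            continue
        needed.add(name)
        for child, deptype in deps.get(name, []):
            if deptype in ("run", "link"):
                stack.append(child)
    return set(explicit) - needed

db = {"app": True, "libfoo": False, "cmake": False}
dag = {"app": [("libfoo", "link"), ("cmake", "build")]}
```

Note that cmake ends up unused even though app depends on it, because a build dependency is not needed at runtime.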

update_explicit(spec, explicit)[source]

Update the spec’s explicit state in the database.

Parameters
  • spec (spack.spec.Spec) – the spec whose install record is being updated

  • explicit (bool) – True if the package was requested explicitly by the user, False if it was pulled in as a dependency of an explicit package.

write_transaction()[source]

Get a write lock context manager for use in a with block.

class spack.database.ForbiddenLock[source]

Bases: object

exception spack.database.ForbiddenLockError(message, long_message=None)[source]

Bases: SpackError

Raised when an upstream DB attempts to acquire a lock

class spack.database.InstallRecord(spec, path, installed, ref_count=0, explicit=False, installation_time=None, deprecated_for=None, in_buildcache=False, origin=None)[source]

Bases: object

A record represents one installation in the DB.

The record keeps track of the spec for the installation, its install path, AND whether or not it is installed. We need the installed flag in case a user either:

  1. blew away a directory, or

  2. used spack uninstall -f to get rid of it

If, in either case, the package was removed but others still depend on it, we still need to track its spec, so we don’t actually remove from the database until a spec has no installed dependents left.

Parameters
  • spec (spack.spec.Spec) – spec tracked by the install record

  • path (str) – path where the spec has been installed

  • installed (bool) – whether or not the spec is currently installed

  • ref_count (int) – number of specs that depend on this one

  • explicit (bool or None) – whether or not this spec was explicitly installed, or pulled-in as a dependency of something else

  • installation_time (datetime.datetime or None) – time of the installation

classmethod from_dict(spec, dictionary)[source]
install_type_matches(installed)[source]
to_dict(include_fields=['spec', 'ref_count', 'path', 'installed', 'explicit', 'installation_time', 'deprecated_for'])[source]
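The record's to_dict honors an include_fields list so callers can serialize only the fields they track; a simplified stand-in using a dataclass (field set abbreviated, names chosen for illustration):

```python
from dataclasses import dataclass, asdict

@dataclass
class Record:
    # Simplified stand-in for InstallRecord.
    spec: str
    path: str
    installed: bool
    ref_count: int = 0
    explicit: bool = False

    def to_dict(self, include_fields=("spec", "ref_count", "path", "installed", "explicit")):
        full = asdict(self)
        return {key: full[key] for key in include_fields if key in full}

    @classmethod
    def from_dict(cls, spec, dictionary):
        # `spec` is passed separately, mirroring InstallRecord.from_dict.
        data = {k: v for k, v in dictionary.items() if k != "spec"}
        return cls(spec=spec, **data)
```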
class spack.database.InstallStatus[source]

Bases: str

class spack.database.InstallStatuses[source]

Bases: object

DEPRECATED = 'deprecated'
INSTALLED = 'installed'
MISSING = 'missing'
classmethod canonicalize(query_arg)[source]
exception spack.database.InvalidDatabaseVersionError(expected, found)[source]

Bases: SpackError

exception spack.database.MissingDependenciesError(message, long_message=None)[source]

Bases: SpackError

Raised when DB cannot find records for dependencies

exception spack.database.NonConcreteSpecAddError(message, long_message=None)[source]

Bases: SpackError

Raised when attempting to add a non-concrete spec to the DB.

exception spack.database.UpstreamDatabaseLockingError(message, long_message=None)[source]

Bases: SpackError

Raised when an operation would need to lock an upstream database

spack.dependency module

Data structures that represent Spack’s dependency relationships.

class spack.dependency.Dependency(pkg, spec, type=('build', 'link'))[source]

Bases: object

Class representing metadata for a dependency on a package.

This class differs from spack.spec.DependencySpec because it represents metadata at the Package level. spack.spec.DependencySpec is a descriptor for an actual package configuration, while Dependency is a descriptor for a package’s dependency requirements.

A dependency is a requirement for a configuration of another package that satisfies a particular spec. The dependency can have types, which determine how that package configuration is required, e.g. whether it is required for building the package, whether it needs to be linked to, or whether it is needed at runtime so that Spack can call commands from it.

A package can also depend on another package with patches. This is for cases where the maintainers of one package also maintain special patches for their dependencies. If one package depends on another with patches, a special version of that dependency with patches applied will be built for use by the dependent package. The patches are included in the new version’s spec hash to differentiate it from unpatched versions of the same package, so that unpatched versions of the dependency package can coexist with the patched version.

merge(other)[source]

Merge constraints, deptypes, and patches of other into self.

property name

Get the name of the dependency package.

spack.dependency.all_deptypes = ('build', 'link', 'run', 'test')

The types of dependency relationships that Spack understands.

spack.dependency.canonical_deptype(deptype)[source]

Convert deptype to a canonical sorted tuple, or raise ValueError.

Parameters

deptype (str or list or tuple) – string representing dependency type, or a list/tuple of such strings. Can also be the builtin function all or the string ‘all’, which result in a tuple of all dependency types known to Spack.
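The canonicalization described above can be sketched as follows (mirroring the documented behavior, not necessarily Spack's exact code):

```python
ALL_DEPTYPES = ("build", "link", "run", "test")

def canonical_deptype(deptype):
    # The builtin `all` or the string "all" mean every known deptype.
    if deptype is all or deptype == "all":
        return ALL_DEPTYPES
    if isinstance(deptype, str):
        deptype = (deptype,)
    bad = set(deptype) - set(ALL_DEPTYPES)
    if bad:
        raise ValueError("invalid dependency types: %s" % sorted(bad))
    return tuple(sorted(set(deptype)))
```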

spack.dependency.default_deptype = ('build', 'link')

Default dependency type if none is specified

spack.dependency.deptype_chars(*type_tuples)[source]

Create a string representing deptypes for many dependencies.

The string will be some subset of ‘blrt’, like ‘bl ‘, ‘b t’, or ‘ lr ‘ where each letter in ‘blrt’ stands for ‘build’, ‘link’, ‘run’, and ‘test’ (the dependency types).

For a single dependency, this just indicates that the dependency has the indicated deptypes. For a list of dependencies, this shows whether ANY dependency in the list has the deptypes (so the deptypes are merged).
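A sketch of that merging behavior: one fixed-width column per letter of ‘blrt’, left blank when no dependency in the list carries that type (an illustration, not Spack's code):

```python
def deptype_chars(*type_tuples):
    # Merge the deptypes of all dependencies, then render one column per
    # deptype in 'blrt' order, blank where the merged set lacks that type.
    present = set()
    for types in type_tuples:
        present.update(types)
    return "".join(d[0] if d in present else " " for d in ("build", "link", "run", "test"))
```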

spack.directives module

This package contains directives that can be used within a package.

Directives are functions that can be called inside a package definition to modify the package, for example:

class OpenMpi(Package):
    depends_on("hwloc")
    provides("mpi")
    ...

provides and depends_on are spack directives.

The available directives are:

  • build_system

  • conflicts

  • depends_on

  • extends

  • patch

  • provides

  • resource

  • variant

  • version

exception spack.directives.DirectiveError(message, long_message=None)[source]

Bases: SpackError

This is raised when something is wrong with a package directive.

class spack.directives.DirectiveMeta(name, bases, attr_dict)[source]

Bases: type

Flushes the directives that were temporarily stored in the staging area into the package.

static directive(dicts=None)[source]

Decorator for Spack directives.

Spack directives allow you to modify a package while it is being defined, e.g. to add version or dependency information. Directives are one of the key pieces of Spack’s package “language”, which is embedded in python.

Here’s an example directive:

@directive(dicts='versions')
def version(pkg, ...):
    ...

This directive allows you to write:

class Foo(Package):
    version(...)

The @directive decorator handles a couple of things for you:

  1. Adds the class scope (pkg) as an initial parameter when called, like a class method would. This allows you to modify a package from within a directive, while the package is still being defined.

  2. It automatically adds a dictionary called “versions” to the package so that you can refer to pkg.versions.

The (dicts='versions') part ensures that ALL packages in Spack will have a versions attribute after they’re constructed, and that if no directive actually modified it, it will just be an empty dict.

This is just a modular way to add storage attributes to the Package class, and it’s how Spack gets information from the packages to the core.
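The two behaviors above can be sketched with a plain decorator. This is a deliberate simplification: the real DirectiveMeta defers directive execution until the class body is finished, which this sketch skips by calling the directive on an already-built class:

```python
def directive(dicts):
    """Hypothetical re-implementation: ensure the package class has a
    storage dict named `dicts`, then pass the class as first argument."""
    def decorator(fn):
        def wrapper(pkg_cls, *args, **kwargs):
            if not hasattr(pkg_cls, dicts):
                setattr(pkg_cls, dicts, {})
            return fn(pkg_cls, *args, **kwargs)
        return wrapper
    return decorator

@directive("versions")
def version(pkg_cls, ver, **kwargs):
    # Store the version metadata in the dict the decorator guarantees.
    pkg_cls.versions[ver] = kwargs

class Foo:
    pass

version(Foo, "1.0", sha256="deadbeef")
```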

static pop_from_context()[source]

Pop the last constraint from the context

static push_to_context(when_spec)[source]

Add a spec to the context constraints.

spack.directives.build_system(*values, **kwargs)[source]
spack.directives.conflicts(conflict_spec, when=None, msg=None)[source]

Allows a package to define a conflict.

Currently, a “conflict” is a concretized configuration that is known to be invalid. For example, a package that is known not to be buildable with intel compilers can declare:

conflicts('%intel')

To express the same constraint only when the ‘foo’ variant is activated:

conflicts('%intel', when='+foo')
Parameters
  • conflict_spec (spack.spec.Spec) – constraint defining the known conflict

  • when (spack.spec.Spec) – optional constraint that triggers the conflict

  • msg (str) – optional user defined message

spack.directives.depends_on(spec, when=None, type=('build', 'link'), patches=None)[source]

Creates a dict of deps with specs defining when they apply.

Parameters
  • spec (spack.spec.Spec or str) – the package and constraints depended on

  • when (spack.spec.Spec or str) – when the dependent satisfies this, it has the dependency represented by spec

  • type (str or tuple) – str or tuple of legal Spack deptypes

  • patches (Callable or list) – single result of patch() directive, a str to be passed to patch, or a list of these

This directive is to be used inside a Package definition to declare that the package requires other packages to be built first. @see The section “Dependency specs” in the Spack Packaging Guide.

spack.directives.extends(spec, type=('build', 'run'), **kwargs)[source]

Same as depends_on, but also adds this package to the extendee list.

Keyword arguments can be passed to extends() so that extension packages can pass parameters to the extendee’s extension mechanism.

spack.directives.patch(url_or_filename, level=1, when=None, working_dir='.', **kwargs)[source]

Packages can declare patches to apply to source. You can optionally provide a when spec to indicate that a particular patch should only be applied when the package’s spec meets certain conditions (e.g. a particular version).

Parameters
  • url_or_filename (str) – url or relative filename of the patch

  • level (int) – patch level (as in the patch shell command)

  • when (spack.spec.Spec) – optional anonymous spec that specifies when to apply the patch

  • working_dir (str) – dir to change to before applying

Keyword Arguments
  • sha256 (str) – sha256 sum of the patch, used to verify the patch (only required for URL patches)

  • archive_sha256 (str) – sha256 sum of the archive, if the patch is compressed (only required for compressed URL patches)

spack.directives.provides(*specs, **kwargs)[source]

Allows packages to provide a virtual dependency. If a package provides ‘mpi’, other packages can declare that they depend on “mpi”, and spack can use the providing package to satisfy the dependency.

spack.directives.resource(**kwargs)[source]

Define an external resource to be fetched and staged when building the package. Based on the keywords present in the dictionary, the appropriate FetchStrategy will be used for the resource. Resources are fetched and staged in their own folder inside the Spack stage area, and then moved into the stage area of the package that needs them.

List of recognized keywords:

  • ‘when’ : (optional) represents the condition upon which the resource is needed

  • ‘destination’ : (optional) path where to move the resource. This path must be relative to the main package stage area.

  • ‘placement’ : (optional) gives the possibility to fine tune how the resource is moved into the main package stage area.

spack.directives.variant(name, default=None, description='', values=None, multi=None, validator=None, when=None, sticky=False)[source]

Define a variant for the package. The packager can specify a default value as well as a text description.

Parameters
  • name (str) – name of the variant

  • default (str or bool) – default value for the variant, if not specified otherwise the default will be False for a boolean variant and ‘nothing’ for a multi-valued variant

  • description (str) – description of the purpose of the variant

  • values (tuple or Callable) – either a tuple of strings containing the allowed values, or a callable accepting one value and returning True if it is valid

  • multi (bool) – if False only one value per spec is allowed for this variant

  • validator (Callable) – optional group validator to enforce additional logic. It receives the package name, the variant name and a tuple of values and should raise an instance of SpackError if the group doesn’t meet the additional constraints

  • when (spack.spec.Spec, bool) – optional condition on which the variant applies

  • sticky (bool) – the variant should not be changed by the concretizer to find a valid concrete spec.

Raises

DirectiveError – if arguments passed to the directive are invalid

spack.directives.version(ver, checksum=None, **kwargs)[source]

Adds a version and, if appropriate, metadata for fetching its code.

The version directives are aggregated into a versions dictionary attribute with Version keys and metadata values, where the metadata is stored as a dictionary of kwargs.

The dict of arguments is turned into a valid fetch strategy for code packages later. See spack.fetch_strategy.for_package_version().

Keyword Arguments

deprecated (bool) – whether or not this version is deprecated

spack.directory_layout module

class spack.directory_layout.DirectoryLayout(root, **kwargs)[source]

Bases: object

A directory layout is used to associate unique paths with specs. Different installations are going to want different layouts for their install, and they can use this to customize the nesting structure of spack installs. The default layout is:

  • <install root>/

    • <platform-os-target>/

      • <compiler>-<compiler version>/

        • <name>-<version>-<hash>

The hash here is a SHA-1 hash for the full DAG plus the build spec.

The installation directory projections can be modified with the projections argument.
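A projection is essentially a format string over spec attributes; the default layout above corresponds to something like the following sketch, where a plain dict stands in for a concrete spec (the attribute names are illustrative, not Spack's exact projection tokens):

```python
def relative_path_for_spec(spec, projection="{architecture}/{compiler}-{compiler_version}/{name}-{version}-{hash}"):
    # Hypothetical: substitute spec attributes into the projection template.
    return projection.format(**spec)

spec = {
    "architecture": "linux-ubuntu22.04-x86_64",
    "compiler": "gcc",
    "compiler_version": "12.3.0",
    "name": "zlib",
    "version": "1.2.13",
    "hash": "abcdef",
}
```

Customizing the layout then amounts to supplying a different projection string with the same placeholder names.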

all_deprecated_specs()[source]
all_specs()[source]
build_packages_path(spec)[source]
create_install_directory(spec)[source]
deprecated_file_path(deprecated_spec, deprecator_spec=None)[source]

Gets full path to spec file for deprecated spec

If the deprecator_spec is provided, use that. Otherwise, assume deprecated_spec is already deprecated and its prefix links to the prefix of its deprecator.

disable_upstream_check()[source]
ensure_installed(spec)[source]

Throws InconsistentInstallDirectoryError if:
  1. the spec prefix does not exist,

  2. the spec prefix does not contain a spec file, or

  3. we read a spec with the wrong DAG hash out of an existing install directory.

env_metadata_path(spec)[source]
property hidden_file_regexes
metadata_path(spec)[source]
path_for_spec(spec)[source]

Return absolute path from the root to a directory for the spec.

read_spec(path)[source]

Read the contents of a file and parse them as a spec

relative_path_for_spec(spec)[source]
remove_install_directory(spec, deprecated=False)[source]

Removes a prefix and any empty parent directories from the root. Raises RemoveFailedError if something goes wrong.

spec_file_path(spec)[source]

Gets full path to spec file

specs_by_hash()[source]
write_host_environment(spec)[source]

The host environment is a json file with os, kernel, and spack versioning. We use it in the case that an analysis later needs to easily access this information.

write_spec(spec, path)[source]

Write a spec out to a file.

exception spack.directory_layout.DirectoryLayoutError(message, long_msg=None)[source]

Bases: SpackError

Superclass for directory layout errors.

exception spack.directory_layout.ExtensionAlreadyInstalledError(spec, ext_spec)[source]

Bases: DirectoryLayoutError

Raised when an extension is added to a package that already has it.

exception spack.directory_layout.ExtensionConflictError(spec, ext_spec, conflict)[source]

Bases: DirectoryLayoutError

Raised when adding an extension would conflict with one already present.

exception spack.directory_layout.InconsistentInstallDirectoryError(message, long_msg=None)[source]

Bases: DirectoryLayoutError

Raised when a package seems to be installed to the wrong place.

exception spack.directory_layout.InvalidDirectoryLayoutParametersError(message, long_msg=None)[source]

Bases: DirectoryLayoutError

Raised when invalid directory layout parameters are supplied.

exception spack.directory_layout.InvalidExtensionSpecError(message, long_msg=None)[source]

Bases: DirectoryLayoutError

Raised when an extension file has a bad spec in it.

exception spack.directory_layout.RemoveFailedError(installed_spec, prefix, error)[source]

Bases: DirectoryLayoutError

Raised when a DirectoryLayout cannot remove an install prefix.

exception spack.directory_layout.SpecReadError(message, long_msg=None)[source]

Bases: DirectoryLayoutError

Raised when directory layout can’t read a spec.

spack.error module

exception spack.error.NoHeadersError(message, long_message=None)[source]

Bases: SpackError

Raised when package headers are requested but cannot be found

exception spack.error.NoLibrariesError(message_or_name, prefix=None)[source]

Bases: SpackError

Raised when package libraries are requested but cannot be found

exception spack.error.SpackError(message, long_message=None)[source]

Bases: Exception

This is the superclass for all Spack errors. Subclasses can be found in the modules they have to do with.

die()[source]
property long_message
print_context()[source]

Print extended debug information about this exception.

This is usually printed when the top-level Spack error handler calls die(), but it can be called separately beforehand if a lower-level error handler needs to print error context and continue without raising the exception to the top level.

exception spack.error.SpecError(message, long_message=None)[source]

Bases: SpackError

Superclass for all errors that occur while constructing specs.

exception spack.error.UnsatisfiableSpecError(provided, required, constraint_type)[source]

Bases: SpecError

Raised when a spec conflicts with package constraints.

For original concretizer, provide the requirement that was violated when raising.

exception spack.error.UnsupportedPlatformError(message)[source]

Bases: SpackError

Raised by packages when a platform is not supported

spack.error.debug = 0

The level at which we should write stack traces or short error messages. This is module-scoped because it needs to be set very early.

spack.extensions module

Service functions and classes to implement the hooks for Spack’s command extensions.

exception spack.extensions.CommandNotFoundError(cmd_name)[source]

Bases: SpackError

Exception class thrown when a requested command is not recognized as such.

exception spack.extensions.ExtensionNamingError(path)[source]

Bases: SpackError

Exception class thrown when a configured extension does not follow the expected naming convention.

spack.extensions.extension_name(path)[source]

Returns the name of the extension in the path passed as argument.

Parameters

path (str) – path where the extension resides

Returns

The extension name.

Raises

ExtensionNamingError – if path does not match the expected format for a Spack command extension.
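
Assuming the convention that command extensions live in directories named `spack-<name>`, the naming check might look roughly like this (a simplified sketch, not Spack's actual implementation):

```python
import os
import re

class ExtensionNamingError(Exception):
    pass

def extension_name(path):
    # Assumed convention: extension directories are named "spack-<name>";
    # anything else fails the check and raises.
    base = os.path.basename(os.path.normpath(path))
    match = re.match(r'^spack-(\w[-\w]*)$', base)
    if not match:
        raise ExtensionNamingError(path)
    return match.group(1)
```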

spack.extensions.get_command_paths()[source]

Return the list of paths to search for command files.

spack.extensions.get_extension_paths()[source]

Return the list of canonicalized extension paths from config:extensions.

spack.extensions.get_module(cmd_name)[source]

Imports the extension module for a particular command name and returns it.

Parameters

cmd_name (str) – name of the command for which to get a module (contains -, not _).

spack.extensions.get_template_dirs()[source]

Returns the list of directories to search for templates in extensions.

spack.extensions.load_command_extension(command, path)[source]

Loads a command extension from the path passed as argument.

Parameters
  • command (str) – name of the command (contains -, not _).

  • path (str) – base path of the command extension

Returns

A valid module if found and loadable; None if not found. Module loading exceptions are passed through.

spack.extensions.path_for_extension(target_name, *paths)[source]

Return the test root dir for a given extension.

Parameters
  • target_name (str) – name of the extension to test

  • *paths – paths where the extensions reside

Returns

Root directory where tests should reside or None

spack.fetch_strategy module

Fetch strategies are used to download source code into a staging area in order to build it. They need to define the following methods:

  • fetch()

    This should attempt to download/check out source from somewhere.

  • check()

    Apply a checksum to the downloaded source code, e.g. for an archive. May not do anything if the fetch method was safe to begin with.

  • expand()

    Expand (e.g., an archive) downloaded file to source, with the standard stage source path as the destination directory.

  • reset()

    Restore original state of downloaded code. Used by clean commands. This may just remove the expanded source and re-expand an archive, or it may run something like git reset --hard.

  • archive()

    Archive a source directory, e.g. for creating a mirror.
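
The five methods above can be illustrated with a toy strategy; the class below only simulates fetching with flags and is an illustration of the interface, not part of Spack:

```python
class ToyFetchStrategy:
    """Toy stand-in implementing the five documented methods; the
    'download' is simulated with flags rather than real network I/O."""

    def __init__(self, url, digest=None):
        self.url = url
        self.digest = digest
        self.fetched = False
        self.expanded = False

    def fetch(self):
        # Attempt to download/check out source from somewhere.
        self.fetched = True
        return True

    def check(self):
        # Apply a checksum to the downloaded source, if one was given.
        if not self.fetched:
            raise RuntimeError('fetch() must run before check()')
        return self.digest is not None

    def expand(self):
        # Expand the downloaded file into the stage source path.
        self.expanded = True

    def reset(self):
        # Restore the original state of the downloaded code.
        self.expanded = False

    def archive(self, destination):
        # Archive the source directory, e.g. for creating a mirror.
        return destination
```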

class spack.fetch_strategy.BundleFetchStrategy(**kwargs)[source]

Bases: FetchStrategy

Fetch strategy associated with bundle, or no-code, packages.

Having a basic fetch strategy is a requirement for executing post-install hooks. Consequently, this class provides the API but does little more than log messages.

TODO: Remove this class by refactoring resource handling and the link between composite stages and composite fetch strategies (see #11981).

property cachable

Report False as there is no code to cache.

fetch()[source]

Simply report success – there is no code to fetch.

mirror_id()[source]

BundlePackages don’t have a mirror id.

source_id()[source]

BundlePackages don’t have a source id.

url_attr = ''

There is no associated URL keyword in version() for no-code packages but this property is required for some strategy-related functions (e.g., check_pkg_attributes).

class spack.fetch_strategy.CacheURLFetchStrategy(url=None, checksum=None, **kwargs)[source]

Bases: URLFetchStrategy

The resource associated with a cache URL may be out of date.

fetch()[source]

Fetch source code archive or repo.

Returns

True on success, False on failure.

Return type

bool

exception spack.fetch_strategy.ChecksumError(message, long_message=None)[source]

Bases: FetchError

Raised when archive fails to checksum.

class spack.fetch_strategy.CvsFetchStrategy(**kwargs)[source]

Bases: VCSFetchStrategy

Fetch strategy that gets source code from a CVS repository.

Use like this in a package:

version('name',
        cvs=':pserver:anonymous@www.example.com:/cvsroot%module=modulename')

Optionally, you can provide a branch and/or a date for the URL:

version('name',
        cvs=':pserver:anonymous@www.example.com:/cvsroot%module=modulename',
        branch='branchname', date='date')

Repositories are checked out into the standard stage source path directory.

archive(destination)[source]

Create an archive of the downloaded data for a mirror.

For downloaded files, this should preserve the checksum of the original file. For repositories, it should just create an expandable tarball out of the downloaded repository.

property cachable

Whether fetcher is capable of caching the resource it retrieves.

This generally is determined by whether the resource is identifiably associated with a specific package version.

Returns

True if can cache, False otherwise.

Return type

bool

property cvs
fetch()[source]

Fetch source code archive or repo.

Returns

True on success, False on failure.

Return type

bool

mirror_id()[source]

This is a unique ID for a source that is intended to help identify reuse of resources across packages.

It is unique like source-id, but it does not include the package name and is not necessarily easy for a human to create themselves.

optional_attrs = ['branch', 'date']
reset()[source]

Revert to freshly downloaded state.

For archive files, this may just re-expand the archive.

source_id()[source]

A unique ID for the source.

It is intended that a human could easily generate this themselves using the information available to them in the Spack package.

The returned value is added to the content which determines the full hash for a package using str().

url_attr = 'cvs'

The URL attribute must be specified either at the package class level, or as a keyword argument to version(). It is used to distinguish fetchers for different versions in the package DSL.

exception spack.fetch_strategy.ExtrapolationError(message, long_message=None)[source]

Bases: FetchError

Raised when we can’t extrapolate a version for a package.

exception spack.fetch_strategy.FailedDownloadError(url, msg='')[source]

Bases: FetchError

Raised when a download fails.

class spack.fetch_strategy.FetchStrategy(**kwargs)[source]

Bases: object

Superclass of all fetch strategies.

archive(destination)[source]

Create an archive of the downloaded data for a mirror.

For downloaded files, this should preserve the checksum of the original file. For repositories, it should just create an expandable tarball out of the downloaded repository.

property cachable

Whether fetcher is capable of caching the resource it retrieves.

This generally is determined by whether the resource is identifiably associated with a specific package version.

Returns

True if can cache, False otherwise.

Return type

bool

check()[source]

Checksum the archive fetched by this FetchStrategy.

expand()[source]

Expand the downloaded archive into the stage source path.

fetch()[source]

Fetch source code archive or repo.

Returns

True on success, False on failure.

Return type

bool

classmethod matches(args)[source]

Predicate that matches fetch strategies to arguments of the version directive.

Parameters

args – arguments of the version directive
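
A hedged sketch of how such a predicate could dispatch on the version() keyword arguments; the class names are illustrative, and the real predicate may consult more than `url_attr`:

```python
class Strategy:
    # A strategy matches when its url_attr appears among the
    # version() keyword arguments (simplified; illustrative only).
    url_attr = None

    @classmethod
    def matches(cls, args):
        return cls.url_attr in args

class GitLike(Strategy):
    url_attr = 'git'

class UrlLike(Strategy):
    url_attr = 'url'
```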

mirror_id()[source]

This is a unique ID for a source that is intended to help identify reuse of resources across packages.

It is unique like source-id, but it does not include the package name and is not necessarily easy for a human to create themselves.

optional_attrs = []
reset()[source]

Revert to freshly downloaded state.

For archive files, this may just re-expand the archive.

set_package(package)[source]
source_id()[source]

A unique ID for the source.

It is intended that a human could easily generate this themselves using the information available to them in the Spack package.

The returned value is added to the content which determines the full hash for a package using str().

url_attr = None

The URL attribute must be specified either at the package class level, or as a keyword argument to version(). It is used to distinguish fetchers for different versions in the package DSL.

class spack.fetch_strategy.FetchStrategyComposite[source]

Bases: Composite

Composite for a FetchStrategy object.

classmethod matches(args)

Predicate that matches fetch strategies to arguments of the version directive.

Parameters

args – arguments of the version directive

set_package(package)[source]
source_id()[source]
exception spack.fetch_strategy.FetcherConflict(message, long_message=None)[source]

Bases: FetchError

Raised for packages with invalid fetch attributes.

class spack.fetch_strategy.FsCache(root)[source]

Bases: object

destroy()[source]
fetcher(target_path, digest, **kwargs)[source]
store(fetcher, relative_dest)[source]
class spack.fetch_strategy.GCSFetchStrategy(*args, **kwargs)[source]

Bases: URLFetchStrategy

FetchStrategy that pulls from a GCS bucket.

fetch()[source]

Fetch source code archive or repo.

Returns

True on success, False on failure.

Return type

bool

url_attr = 'gs'

The URL attribute must be specified either at the package class level, or as a keyword argument to version(). It is used to distinguish fetchers for different versions in the package DSL.

class spack.fetch_strategy.GitFetchStrategy(**kwargs)[source]

Bases: VCSFetchStrategy

Fetch strategy that gets source code from a git repository. Use like this in a package:

version('name', git='https://github.com/project/repo.git')

Optionally, you can provide a branch, tag, or commit to check out, e.g.:

version('1.1', git='https://github.com/project/repo.git', tag='v1.1')

You can use these three optional attributes in addition to git:

  • branch: Particular branch to build from (default is the

    repository’s default branch)

  • tag: Particular tag to check out

  • commit: Particular commit hash in the repo

Repositories are cloned into the standard stage source path directory.

archive(destination)[source]

Create an archive of the downloaded data for a mirror.

For downloaded files, this should preserve the checksum of the original file. For repositories, it should just create an expandable tarball out of the downloaded repository.

property cachable

Whether fetcher is capable of caching the resource it retrieves.

This generally is determined by whether the resource is identifiably associated with a specific package version.

Returns

True if can cache, False otherwise.

Return type

bool

clone(dest=None, commit=None, branch=None, tag=None, bare=False)[source]

Clone a repository to a path.

This method handles cloning from git, but does not require a stage.

Parameters
  • dest (str or None) – The path into which the code is cloned. If None, requires a stage and uses the stage’s source path.

  • commit (str or None) – A commit to fetch from the remote. Only one of commit, branch, and tag may be non-None.

  • branch (str or None) – A branch to fetch from the remote.

  • tag (str or None) – A tag to fetch from the remote.

  • bare (bool) – Execute a “bare” git clone (--bare option to git)

fetch()[source]

Fetch source code archive or repo.

Returns

True on success, False on failure.

Return type

bool

property git
property git_version
git_version_re = 'git version (\\S+)'
mirror_id()[source]

This is a unique ID for a source that is intended to help identify reuse of resources across packages.

It is unique like source-id, but it does not include the package name and is not necessarily easy for a human to create themselves.

optional_attrs = ['tag', 'branch', 'commit', 'submodules', 'get_full_repo', 'submodules_delete']
protocol_supports_shallow_clone()[source]

Shallow clone operations (--depth #) are not supported by the basic HTTP protocol or by no-protocol file specifications. Use (e.g.) https:// or file:// instead.
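
Per the note above, a simplified version of this predicate might look like the following (an illustrative sketch, not Spack's exact logic):

```python
def protocol_supports_shallow_clone(url):
    # Plain http:// and bare (no-protocol) file paths don't support
    # git's --depth option; schemes such as https:// or file:// do.
    if url.startswith('http://'):
        return False
    if '://' not in url:  # no-protocol file specification
        return False
    return True
```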

reset()[source]

Revert to freshly downloaded state.

For archive files, this may just re-expand the archive.

source_id()[source]

A unique ID for the source.

It is intended that a human could easily generate this themselves using the information available to them in the Spack package.

The returned value is added to the content which determines the full hash for a package using str().

url_attr = 'git'

The URL attribute must be specified either at the package class level, or as a keyword argument to version(). It is used to distinguish fetchers for different versions in the package DSL.

static version_from_git(git_exe)[source]

Given a git executable, return the Version (this will fail if the output cannot be parsed into a valid Version).

class spack.fetch_strategy.GoFetchStrategy(**kwargs)[source]

Bases: VCSFetchStrategy

Fetch strategy that employs the go get infrastructure.

Use like this in a package:

version('name',
        go='github.com/monochromegane/the_platinum_searcher/...')

Go get does not natively support versions; they can be faked with git.

The fetched source will be moved to the standard stage source path directory during the expand step.

archive(destination)[source]

Create an archive of the downloaded data for a mirror.

For downloaded files, this should preserve the checksum of the original file. For repositories, it should just create an expandable tarball out of the downloaded repository.

expand()[source]

Expand the downloaded archive into the stage source path.

fetch()[source]

Fetch source code archive or repo.

Returns

True on success, False on failure.

Return type

bool

property go
property go_version
reset()[source]

Revert to freshly downloaded state.

For archive files, this may just re-expand the archive.

url_attr = 'go'

The URL attribute must be specified either at the package class level, or as a keyword argument to version(). It is used to distinguish fetchers for different versions in the package DSL.

class spack.fetch_strategy.HgFetchStrategy(**kwargs)[source]

Bases: VCSFetchStrategy

Fetch strategy that gets source code from a Mercurial repository. Use like this in a package:

version('name', hg='https://jay.grs.rwth-aachen.de/hg/lwm2')

Optionally, you can provide a branch, or revision to check out, e.g.:

version('torus',
        hg='https://jay.grs.rwth-aachen.de/hg/lwm2', branch='torus')

You can use the optional ‘revision’ attribute to check out a branch, tag, or particular revision in hg. To prevent non-reproducible builds, using a moving target like a branch is discouraged.

  • revision: Particular revision, branch, or tag.

Repositories are cloned into the standard stage source path directory.

archive(destination)[source]

Create an archive of the downloaded data for a mirror.

For downloaded files, this should preserve the checksum of the original file. For repositories, it should just create an expandable tarball out of the downloaded repository.

property cachable

Whether fetcher is capable of caching the resource it retrieves.

This generally is determined by whether the resource is identifiably associated with a specific package version.

Returns

True if can cache, False otherwise.

Return type

bool

fetch()[source]

Fetch source code archive or repo.

Returns

True on success, False on failure.

Return type

bool

property hg

Returns: Executable: the hg executable

mirror_id()[source]

This is a unique ID for a source that is intended to help identify reuse of resources across packages.

It is unique like source-id, but it does not include the package name and is not necessarily easy for a human to create themselves.

optional_attrs = ['revision']
reset()[source]

Revert to freshly downloaded state.

For archive files, this may just re-expand the archive.

source_id()[source]

A unique ID for the source.

It is intended that a human could easily generate this themselves using the information available to them in the Spack package.

The returned value is added to the content which determines the full hash for a package using str().

url_attr = 'hg'

The URL attribute must be specified either at the package class level, or as a keyword argument to version(). It is used to distinguish fetchers for different versions in the package DSL.

exception spack.fetch_strategy.InvalidArgsError(pkg=None, version=None, **args)[source]

Bases: FetchError

Raised when a version can’t be deduced from a set of arguments.

exception spack.fetch_strategy.NoArchiveFileError(message, long_message=None)[source]

Bases: FetchError

Raised when an archive file is expected but none exists.

exception spack.fetch_strategy.NoCacheError(message, long_message=None)[source]

Bases: FetchError

Raised when there is no cached archive for a package.

exception spack.fetch_strategy.NoDigestError(message, long_message=None)[source]

Bases: FetchError

Raised after attempt to checksum when URL has no digest.

exception spack.fetch_strategy.NoStageError(method)[source]

Bases: FetchError

Raised when fetch operations are called before set_stage().

class spack.fetch_strategy.S3FetchStrategy(*args, **kwargs)[source]

Bases: URLFetchStrategy

FetchStrategy that pulls from an S3 bucket.

fetch()[source]

Fetch source code archive or repo.

Returns

True on success, False on failure.

Return type

bool

url_attr = 's3'

The URL attribute must be specified either at the package class level, or as a keyword argument to version(). It is used to distinguish fetchers for different versions in the package DSL.

class spack.fetch_strategy.SvnFetchStrategy(**kwargs)[source]

Bases: VCSFetchStrategy

Fetch strategy that gets source code from a subversion repository.

Use like this in a package:

version('name', svn='http://www.example.com/svn/trunk')

Optionally, you can provide a revision for the URL:

version('name', svn='http://www.example.com/svn/trunk',
        revision='1641')

Repositories are checked out into the standard stage source path directory.

archive(destination)[source]

Create an archive of the downloaded data for a mirror.

For downloaded files, this should preserve the checksum of the original file. For repositories, it should just create an expandable tarball out of the downloaded repository.

property cachable

Whether fetcher is capable of caching the resource it retrieves.

This generally is determined by whether the resource is identifiably associated with a specific package version.

Returns

True if can cache, False otherwise.

Return type

bool

fetch()[source]

Fetch source code archive or repo.

Returns

True on success, False on failure.

Return type

bool

mirror_id()[source]

This is a unique ID for a source that is intended to help identify reuse of resources across packages.

It is unique like source-id, but it does not include the package name and is not necessarily easy for a human to create themselves.

optional_attrs = ['revision']
reset()[source]

Revert to freshly downloaded state.

For archive files, this may just re-expand the archive.

source_id()[source]

A unique ID for the source.

It is intended that a human could easily generate this themselves using the information available to them in the Spack package.

The returned value is added to the content which determines the full hash for a package using str().

property svn
url_attr = 'svn'

The URL attribute must be specified either at the package class level, or as a keyword argument to version(). It is used to distinguish fetchers for different versions in the package DSL.

class spack.fetch_strategy.URLFetchStrategy(url=None, checksum=None, **kwargs)[source]

Bases: FetchStrategy

URLFetchStrategy pulls source code from a URL for an archive, checks the archive against a checksum, and decompresses the archive.

The destination for the resulting file(s) is the standard stage path.

archive(destination)[source]

Just moves this archive to the destination.

property archive_file

Path to the source archive within this stage directory.

property cachable

Whether fetcher is capable of caching the resource it retrieves.

This generally is determined by whether the resource is identifiably associated with a specific package version.

Returns

True if can cache, False otherwise.

Return type

bool

property candidate_urls
check()[source]

Check the downloaded archive against a checksum digest. No-op if this stage checks code out of a repository.

property curl
expand()[source]

Expand the downloaded archive into the stage source path.

fetch()[source]

Fetch source code archive or repo.

Returns

True on success, False on failure.

Return type

bool

mirror_id()[source]

This is a unique ID for a source that is intended to help identify reuse of resources across packages.

It is unique like source-id, but it does not include the package name and is not necessarily easy for a human to create themselves.

optional_attrs = ['md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512', 'checksum']
reset()[source]

Removes the source path if it exists, then re-expands the archive.

source_id()[source]

A unique ID for the source.

It is intended that a human could easily generate this themselves using the information available to them in the Spack package.

The returned value is added to the content which determines the full hash for a package using str().

url_attr = 'url'

The URL attribute must be specified either at the package class level, or as a keyword argument to version(). It is used to distinguish fetchers for different versions in the package DSL.

class spack.fetch_strategy.VCSFetchStrategy(**kwargs)[source]

Bases: FetchStrategy

Superclass for version control system fetch strategies.

Like all fetchers, VCS fetchers are identified by the attributes passed to the version directive. The optional_attrs for a VCS fetch strategy represent types of revisions, e.g. tags, branches, commits, etc.

The required attributes (git, svn, etc.) are used to specify the URL and to distinguish a VCS fetch strategy from a URL fetch strategy.

archive(destination, **kwargs)[source]

Create an archive of the downloaded data for a mirror.

For downloaded files, this should preserve the checksum of the original file. For repositories, it should just create an expandable tarball out of the downloaded repository.

check()[source]

Checksum the archive fetched by this FetchStrategy.

expand()[source]

Expand the downloaded archive into the stage source path.

spack.fetch_strategy.all_strategies = [<class 'spack.fetch_strategy.BundleFetchStrategy'>, <class 'spack.fetch_strategy.URLFetchStrategy'>, <class 'spack.fetch_strategy.CacheURLFetchStrategy'>, <class 'spack.fetch_strategy.GoFetchStrategy'>, <class 'spack.fetch_strategy.GitFetchStrategy'>, <class 'spack.fetch_strategy.CvsFetchStrategy'>, <class 'spack.fetch_strategy.SvnFetchStrategy'>, <class 'spack.fetch_strategy.HgFetchStrategy'>, <class 'spack.fetch_strategy.S3FetchStrategy'>, <class 'spack.fetch_strategy.GCSFetchStrategy'>]

List of all fetch strategies, created by FetchStrategy metaclass.

spack.fetch_strategy.check_pkg_attributes(pkg)[source]

Find ambiguous top-level fetch attributes in a package.

Currently this only ensures that two or more VCS fetch strategies are not specified at once.

spack.fetch_strategy.fetcher(cls)[source]

Decorator used to register fetch strategies.

spack.fetch_strategy.for_package_version(pkg, version)[source]

Determine a fetch strategy based on the arguments supplied to version() in the package description.

spack.fetch_strategy.from_kwargs(**kwargs)[source]

Construct an appropriate FetchStrategy from the given keyword arguments.

Parameters

**kwargs – dictionary of keyword arguments, e.g. from a version() directive in a package.

Returns

The fetch strategy that matches the args, based on attribute names (e.g., git, hg, etc.)

Return type

Callable

Raises

spack.util.web.FetchError – If no fetch_strategy matches the args.
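
A hypothetical dispatcher in the spirit of from_kwargs(); the `STRATEGIES` registry and the string return values are simplified stand-ins for the real strategy classes, which are discovered via all_strategies:

```python
class FetchError(Exception):
    pass

# Hypothetical registry mapping url_attr names to strategy names.
STRATEGIES = {'url': 'URLFetchStrategy', 'git': 'GitFetchStrategy',
              'hg': 'HgFetchStrategy', 'svn': 'SvnFetchStrategy'}

def from_kwargs(**kwargs):
    # Return the first strategy whose url_attr appears in the kwargs,
    # e.g. from a version() directive; raise if nothing matches.
    for url_attr, strategy in STRATEGIES.items():
        if url_attr in kwargs:
            return strategy
    raise FetchError('no fetch strategy matches %s' % sorted(kwargs))
```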

spack.fetch_strategy.from_list_url(pkg)[source]

If a package provides a URL which lists URLs for resources by version, this can create a fetcher for a URL discovered for the specified package’s version.

spack.fetch_strategy.from_url(url)[source]

Given a URL, find an appropriate fetch strategy for it. Currently just gives you a URLFetchStrategy that uses curl.

TODO: make this return appropriate fetch strategies for other types of URLs.

spack.fetch_strategy.from_url_scheme(url, *args, **kwargs)[source]

Finds a suitable FetchStrategy by matching its url_attr with the scheme in the given url.

spack.fetch_strategy.stable_target(fetcher)[source]

Returns whether the fetcher target is expected to have a stable checksum. This is only true if the target is a preexisting archive file.

spack.fetch_strategy.warn_content_type_mismatch(subject, content_type='HTML')[source]

spack.filesystem_view module

class spack.filesystem_view.FilesystemView(root, layout, **kwargs)[source]

Bases: object

Governs a filesystem view that is located at certain root-directory.

Packages are linked from their install directories into a common file hierarchy.

In distributed filesystems, loading each installed package separately can lead to slow-downs due to too many directories being traversed. This can be circumvented by loading all needed modules into a common directory structure.

add_specs(*specs, **kwargs)[source]

Add given specs to view.

Should accept with_dependencies as keyword argument (default True) to indicate whether or not dependencies should be activated as well.

Should accept an exclude keyword argument containing a list of regexps that filter out matching spec names.

This method should make use of activate_standalone.

add_standalone(spec)[source]

Add (link) a standalone package into this view.

check_added(spec)[source]

Check if the given concrete spec is active in this view.

get_all_specs()[source]

Get all specs currently active in this view.

get_projection_for_spec(spec)[source]

Get the projection in this view for a spec.

get_spec(spec)[source]

Return the actual spec linked in this view (i.e. do not look it up in the database by name).

spec can be a name or a spec from which the name is extracted.

As there can only be a single version active for any spec the name is enough to identify the spec in the view.

If no spec is present, returns None.

print_status(*specs, **kwargs)[source]
Print a short summary about the given specs, detailing whether:
  • they are active in the view,

  • they are active but the activated version differs,

  • they are not active in the view.

Takes with_dependencies keyword argument so that the status of dependencies is printed as well.

remove_specs(*specs, **kwargs)[source]

Removes given specs from view.

Should accept with_dependencies as keyword argument (default True) to indicate whether or not dependencies should be deactivated as well.

Should accept with_dependents as keyword argument (default True) to indicate whether or not dependents of the deactivated specs should be removed as well.

Should accept an exclude keyword argument containing a list of regexps that filter out matching spec names.

This method should make use of deactivate_standalone.

remove_standalone(spec)[source]

Remove (unlink) a standalone package from this view.

class spack.filesystem_view.YamlFilesystemView(root, layout, **kwargs)[source]

Bases: FilesystemView

Filesystem view to work with a yaml based directory layout.

add_specs(*specs, **kwargs)[source]

Add given specs to view.

Should accept with_dependencies as keyword argument (default True) to indicate whether or not dependencies should be activated as well.

Should accept an exclude keyword argument containing a list of regexps that filter out matching spec names.

This method should make use of activate_standalone.

add_standalone(spec)[source]

Add (link) a standalone package into this view.

check_added(spec)[source]

Check if the given concrete spec is active in this view.

clean()[source]
get_all_specs()[source]

Get all specs currently active in this view.

get_conflicts(*specs)[source]

Return list of tuples (<spec>, <spec in view>) where the spec active in the view differs from the one to be activated.

get_path_meta_folder(spec)[source]

Get path to meta folder for either spec or spec name.

get_projection_for_spec(spec)[source]

Return the projection for a spec in this view.

Relies on the ordering of projections to avoid ambiguity.

get_spec(spec)[source]

Return the actual spec linked in this view (i.e. do not look it up in the database by name).

spec can be a name or a spec from which the name is extracted.

As there can only be a single version active for any spec the name is enough to identify the spec in the view.

If no spec is present, returns None.

merge(spec, ignore=None)[source]
print_conflict(spec_active, spec_specified, level='error')[source]

Singular print function for spec conflicts.

print_status(*specs, **kwargs)[source]
Print a short summary about the given specs, detailing whether..
  • ..they are active in the view.

  • ..they are active but the activated version differs.

  • ..they are not active in the view.

Takes with_dependencies keyword argument so that the status of dependencies is printed as well.

read_projections()[source]
remove_files(files)[source]
remove_specs(*specs, **kwargs)[source]

Removes given specs from view.

Should accept with_dependencies as keyword argument (default True) to indicate whether or not dependencies should be deactivated as well.

Should accept with_dependents as keyword argument (default True) to indicate whether or not dependents on the deactivated specs should be removed as well.

Should accept an exclude keyword argument containing a list of regexps that filter out matching spec names.

This method should make use of deactivate_standalone.

remove_standalone(spec)[source]

Remove (unlink) a standalone package from this view.

unmerge(spec, ignore=None)[source]
write_projections()[source]

spack.gcs_handler module

spack.gcs_handler.gcs_open(req, *args, **kwargs)[source]

Open a reader stream to a blob object on GCS

spack.graph module

Functions for graphing DAGs of dependencies.

This file contains code for graphing DAGs of software packages (i.e. Spack specs). There are two main functions you probably care about:

graph_ascii() will output a colored graph of a spec in ascii format, kind of like the graph git shows with “git log –graph”, e.g.:

o  mpileaks
|\
| |\
| o |  callpath
|/| |
| |\|
| |\ \
| | |\ \
| | | | o  adept-utils
| |_|_|/|
|/| | | |
o | | | |  mpi
 / / / /
| | o |  dyninst
| |/| |
|/|/| |
| | |/
| o |  libdwarf
|/ /
o |  libelf
 /
o  boost

graph_dot() will output a graph of a spec (or multiple specs) in dot format.

Note that graph_ascii assumes a single spec while graph_dot can take a number of specs as input.

class spack.graph.AsciiGraph[source]

Bases: object

write(spec, color=None, out=None)[source]

Write out an ascii graph of the provided spec.

Arguments: spec – spec to graph. This only handles one spec at a time.

Optional arguments:

out – file object to write out to (default is sys.stdout)

color – whether to write in color. Default is to autodetect based on output file.

spack.graph.graph_ascii(spec, node='o', out=None, debug=False, indent=0, color=None, deptype='all')[source]
spack.graph.graph_dot(specs, deptype='all', static=False, out=None)[source]

Generate a graph in dot format of all provided specs.

Print out a dot formatted graph of all the dependencies between package. Output can be passed to graphviz, e.g.:

spack graph --dot qt | dot -Tpdf > spack-graph.pdf
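The DOT output is just a directed graph of dependency edges. A minimal sketch of what such output looks like, built from a plain adjacency mapping (illustrative only; graph_dot itself operates on Spec objects):

```python
def to_dot(deps):
    """Render a dependency mapping {pkg: [deps...]} as Graphviz DOT.

    A toy analogue of graph_dot's output for a hand-written DAG.
    """
    lines = ["digraph G {"]
    for pkg, children in sorted(deps.items()):
        for child in sorted(children):
            lines.append('  "{}" -> "{}";'.format(pkg, child))
    lines.append("}")
    return "\n".join(lines)

dot = to_dot({"mpileaks": ["mpi", "callpath"],
              "callpath": ["mpi", "dyninst"]})
print(dot)
```

The resulting text can be piped to graphviz's dot tool exactly as in the command above.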

spack.hash_types module

Definitions that control how Spack creates Spec hashes.

class spack.hash_types.SpecHashDescriptor(deptype, package_hash, name, override=None)[source]

Bases: object

This class defines how hashes are generated on Spec objects.

Spec hashes in Spack are generated from a serialized (e.g., with YAML) representation of the Spec graph. The representation may only include certain dependency types, and it may optionally include a canonicalized hash of the package.py for each node in the graph.

We currently use different hashes for different use cases.
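The mechanism can be sketched in plain Python: hash a canonical serialization of a node that includes only the selected dependency types, optionally mixing in a package hash. (Illustrative only; field names and the toy node format here are assumptions, not Spack's actual serialization.)

```python
import hashlib
import json

def spec_hash(node, deptypes=("build", "link", "run"), package_hash=None):
    """Hash a toy 'spec node'.

    Only dependencies whose deptype is selected contribute, and a
    package hash may optionally be mixed in, mirroring how different
    SpecHashDescriptors include different inputs.
    """
    payload = {
        "name": node["name"],
        "version": node["version"],
        "dependencies": sorted(
            d["name"] for d in node.get("dependencies", [])
            if d["deptype"] in deptypes
        ),
    }
    if package_hash is not None:
        payload["package_hash"] = package_hash
    # Canonical serialization keeps the hash stable across runs.
    serialized = json.dumps(payload, sort_keys=True)
    return hashlib.sha256(serialized.encode()).hexdigest()

node = {"name": "zlib", "version": "1.2.13",
        "dependencies": [{"name": "cmake", "deptype": "build"}]}
full = spec_hash(node)                                # includes the build dep
runtime = spec_hash(node, deptypes=("link", "run"))   # excludes it
assert full != runtime
```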

property attr

Private attribute stored on spec

spack.hash_types.dag_hash = <spack.hash_types.SpecHashDescriptor object>

Spack’s deployment hash. Includes all inputs that can affect how a package is built.

spack.hash_types.package_hash = <spack.hash_types.SpecHashDescriptor object>

Package hash used as part of dag hash

spack.hash_types.process_hash = <spack.hash_types.SpecHashDescriptor object>

Hash descriptor used only to transfer a DAG, as is, across processes

spack.install_test module

exception spack.install_test.TestFailure(failures)[source]

Bases: SpackError

Raised when package tests have failed for an installation.

class spack.install_test.TestSuite(specs, alias=None)[source]

Bases: object

The class that manages specs for spack test run execution.

property content_hash

The hash used to uniquely identify the test suite.

property current_test_cache_dir

Path to the test stage directory where the current spec’s cached build-time files were automatically copied.

Returns

path to the current spec’s staged, cached build-time files.

Return type

str

Raises

TestSuiteSpecError – If there is no spec being tested

property current_test_data_dir

Path to the test stage directory where the current spec’s custom package (data) files were automatically copied.

Returns

path to the current spec’s staged, custom package (data) files

Return type

str

Raises

TestSuiteSpecError – If there is no spec being tested

ensure_stage()[source]

Ensure the test suite stage directory exists.

static from_dict(d)[source]

Instantiates a TestSuite based on a dictionary of specs and an optional alias:

specs: list of the test suite’s specs in dictionary form
alias: the test suite alias

Returns

Instance of TestSuite created from the specs

Return type

TestSuite

static from_file(filename)[source]

Instantiate a TestSuite using the specs and optional alias provided in the given file.

Parameters

filename (str) – The path to the JSON file containing the test suite specs and optional alias.

log_file_for_spec(spec)[source]

The test log file path for the provided spec.

Parameters

spec (spack.spec.Spec) – instance of the spec under test

Returns

the path to the spec’s log file

Return type

str

property name

The name (alias or, if none, hash) of the test suite.

property results_file

The path to the results summary file.

property stage

The root test suite stage directory.

test_dir_for_spec(spec)[source]

The path to the test stage directory for the provided spec.

Parameters

spec (spack.spec.Spec) – instance of the spec under test

Returns

the spec’s test stage directory path

Return type

str

classmethod test_log_name(spec)[source]

The standard log filename for a spec.

Parameters

spec (spack.spec.Spec) – instance of the spec under test

Returns

the spec’s log filename

Return type

str

classmethod test_pkg_id(spec)[source]

The standard install test package identifier.

Parameters

spec (spack.spec.Spec) – instance of the spec under test

Returns

the install test package identifier

Return type

str

tested_file_for_spec(spec)[source]

The test status file path for the spec.

Parameters

spec (spack.spec.Spec) – instance of the spec under test

Returns

the spec’s test status file path

Return type

str

classmethod tested_file_name(spec)[source]

The standard test status filename for the spec.

Parameters

spec (spack.spec.Spec) – instance of the spec under test

Returns

the spec’s test status filename

Return type

str

to_dict()[source]

Build a dictionary for the test suite.

Returns

The dictionary contains entries for up to two keys:

specs: list of the test suite’s specs in dictionary form
alias: the alias, or name, given to the test suite if provided

Return type

dict

write_reproducibility_data()[source]
write_test_result(spec, result)[source]

Write the spec’s test result to the test suite results file.

Parameters
  • spec (spack.spec.Spec) – instance of the spec under test

  • result (str) – result from the spec’s test execution (e.g, PASSED)

exception spack.install_test.TestSuiteFailure(num_failures)[source]

Bases: SpackError

Raised when one or more tests in a suite have failed.

exception spack.install_test.TestSuiteNameError(message, long_message=None)[source]

Bases: SpackError

Raised when there is an issue with the naming of the test suite.

exception spack.install_test.TestSuiteSpecError(message, long_message=None)[source]

Bases: SpackError

Raised when there is an issue associated with the spec being tested.

spack.install_test.get_all_test_suites()[source]

Retrieves all validly staged TestSuites

Returns

a list of TestSuite objects, which may be empty if there are none

Return type

list

spack.install_test.get_escaped_text_output(filename)[source]

Retrieve and escape the expected text output from the file

Parameters

filename (str) – path to the file

Returns

escaped text lines read from the file

Return type

list

spack.install_test.get_named_test_suites(name)[source]

Retrieves test suites with the provided name.

Returns

a list of matching TestSuite instances, which may be empty if none

Return type

list

Raises

TestSuiteNameError – If no name is provided

spack.install_test.get_test_stage_dir()[source]

Retrieves the config:test_stage path to the configured test stage root directory

Returns

absolute path to the configured test stage root or, if none, the default test stage path

Return type

str

spack.install_test.get_test_suite(name)[source]

Ensure there is only one matching test suite with the provided name.

Returns

the name if there is one matching test suite, else None

Return type

str or None

Raises

TestSuiteNameError – If there is more than one matching TestSuite

spack.install_test.write_test_suite_file(suite)[source]

Write the test suite to its (JSON) lock file.

spack.install_test.write_test_summary(num_failed, num_skipped, num_untested, num_specs)[source]

Write a well formatted summary of the totals for each relevant status category.

spack.installer module

This module encapsulates package installation functionality.

The PackageInstaller coordinates concurrent builds of packages for the same Spack instance by leveraging the dependency DAG and file system locks. It also proceeds with the installation of non-dependent packages of failed dependencies in order to install as many dependencies of a package as possible.

Bottom-up traversal of the dependency DAG while prioritizing packages with no uninstalled dependencies allows multiple processes to perform concurrent builds of separate packages associated with a spec.

File system locks enable coordination such that no two processes attempt to build the same or a failed dependency package.

Failures to install dependency packages result in removal of their dependents’ build tasks from the current process. A failure file is also written (and locked) so that other processes can detect the failure and adjust their build tasks accordingly.

This module supports the coordination of local and distributed concurrent installations of packages in a Spack instance.

exception spack.installer.BadInstallPhase(pkg_name, phase)[source]

Bases: InstallError

Raised when an install phase option is not allowed for a package.

class spack.installer.BuildProcessInstaller(pkg, install_args)[source]

Bases: object

This class implements the part of the installation that happens in the child process.

run()[source]

Main entry point from build_process to kick off install in child.

class spack.installer.BuildRequest(pkg, install_args)[source]

Bases: object

Class for representing an installation request.

get_deptypes(pkg)[source]

Determine the required dependency types for the associated package.

Parameters

pkg (spack.package_base.PackageBase) – explicit or implicit package being installed

Returns

required dependency type(s) for the package

Return type

tuple

has_dependency(dep_id)[source]

Returns True if the package id represents a known dependency of the requested package, False otherwise.

run_tests(pkg)[source]

Determine if the tests should be run for the provided package

Parameters

pkg (spack.package_base.PackageBase) – explicit or implicit package being installed

Returns

True if they should be run; False otherwise

Return type

bool

property spec

The specification associated with the package.

traverse_dependencies(spec=None, visited=None)[source]

Yield any dependencies of the appropriate type(s)

Yields

(Spec) The next child spec in the DAG

class spack.installer.BuildTask(pkg, request, compiler, start, attempts, status, installed)[source]

Bases: object

Class for representing the build task for a package.

add_dependent(pkg_id)[source]

Ensure the dependent package id is in the task’s list so it will be properly updated when this package is installed.

Parameters

pkg_id (str) – package identifier of the dependent package

property cache_only
property explicit

The package was explicitly requested by the user.

flag_installed(installed)[source]

Ensure the dependency is not considered to still be uninstalled.

Parameters

installed (list) – the identifiers of packages that have been installed so far

property is_root

The package was requested directly, but may or may not be explicit in an environment.

property key

The key is the tuple (# uninstalled dependencies, sequence).

next_attempt(installed)[source]

Create a new, updated task for the next installation attempt.

property priority

The priority is based on the remaining uninstalled dependencies.

property use_cache
exception spack.installer.ExternalPackageError(message, long_msg=None, pkg=None)[source]

Bases: InstallError

Raised by install() when a package is only for external use.

class spack.installer.InstallAction[source]

Bases: object

INSTALL = 1

Do a standard install

NONE = 0

Don’t perform an install

OVERWRITE = 2

Do an overwrite install

exception spack.installer.InstallError(message, long_msg=None, pkg=None)[source]

Bases: SpackError

Raised when something goes wrong during install or uninstall.

The error can be annotated with a pkg attribute to allow the caller to get the package for which the exception was raised.

exception spack.installer.InstallLockError(message, long_msg=None, pkg=None)[source]

Bases: InstallError

Raised during install when something goes wrong with package locking.

class spack.installer.OverwriteInstall(installer, database, task)[source]

Bases: object

install()[source]

Try to run the install task overwriting the package prefix. If this fails, try to recover the original install prefix. If that fails too, mark the spec as uninstalled. This function always re-raises the original install error if installation fails.

class spack.installer.PackageInstaller(installs=[])[source]

Bases: object

Class for managing the install process for a Spack instance based on a bottom-up DAG approach.

This installer can coordinate concurrent batch and interactive, local and distributed (on a shared file system) builds for the same Spack instance.

install()[source]

Install the requested package(s) and/or associated dependencies.

spack.installer.STATUS_ADDED = 'queued'

Build status indicating task has been added.

spack.installer.STATUS_DEQUEUED = 'dequeued'

Build status indicating the task has been popped from the queue

spack.installer.STATUS_FAILED = 'failed'

Build status indicating the spec failed to install

spack.installer.STATUS_INSTALLED = 'installed'

Build status indicating the spec was successfully installed

spack.installer.STATUS_INSTALLING = 'installing'

Build status indicating the spec is being installed (possibly by another process)

spack.installer.STATUS_REMOVED = 'removed'

Build status indicating task has been removed (to maintain priority queue invariants).

class spack.installer.TermStatusLine(enabled)[source]

Bases: object

This class is used in distributed builds to inform the user that other packages are being installed by another process.

add(pkg_id)[source]

Add a package to the waiting list, and if it is new, update the status line.

clear()[source]

Clear the status line.

class spack.installer.TermTitle(pkg_count)[source]

Bases: object

next_pkg(pkg)[source]
set(text)[source]
exception spack.installer.UpstreamPackageError(message, long_msg=None, pkg=None)[source]

Bases: InstallError

Raised during install when something goes wrong with an upstream package.

spack.installer.build_process(pkg, install_args)[source]

Perform the installation/build of the package.

This runs in a separate child process, and has its own process and python module space set up by build_environment.start_build_process().

This essentially wraps an instance of BuildProcessInstaller so that we can more easily create one in a subprocess.

This function’s return value is returned to the parent process.

Parameters
spack.installer.clear_failures()[source]

Remove all failure tracking markers for the Spack instance.

spack.installer.combine_phase_logs(phase_log_files, log_path)[source]

Read set or list of logs and combine them into one file.

Each phase produces its own log, so this function concatenates all the separate phase log output files into pkg.log_path. It is written generally: it accepts any list of files and a log path to combine them into.

Parameters
  • phase_log_files (list) – a list or iterator of logs to combine

  • log_path (str) – the path to combine them to
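A minimal sketch of the described behavior, using hypothetical phase-log paths (the real function lives in spack.installer and is called with a package's actual phase logs):

```python
import os
import tempfile

def combine_phase_logs(phase_log_files, log_path):
    """Concatenate each phase's log into a single combined file."""
    with open(log_path, "wb") as out:
        for phase_log in phase_log_files:
            with open(phase_log, "rb") as f:
                out.write(f.read())

# Fabricated per-phase logs for illustration...
tmp = tempfile.mkdtemp()
logs = []
for i, text in enumerate(["configure ok\n", "build ok\n"]):
    path = os.path.join(tmp, "phase%d.log" % i)
    with open(path, "w") as f:
        f.write(text)
    logs.append(path)

# ...combined into one file in phase order.
combined = os.path.join(tmp, "spack-build-out.txt")
combine_phase_logs(logs, combined)
```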

spack.installer.dump_packages(spec, path)[source]

Dump all package information for a spec and its dependencies.

This creates a package repository within path for every namespace in the spec DAG, and fills the repos with package files and patch files for every node in the DAG.

Parameters
  • spec (spack.spec.Spec) – the Spack spec whose package information is to be dumped

  • path (str) – the path to the build packages directory

spack.installer.get_dependent_ids(spec)[source]

Return a list of package ids for the spec’s dependents

Parameters

spec (spack.spec.Spec) – Concretized spec

Returns

list of package ids

Return type

list

spack.installer.install_msg(name, pid)[source]

Colorize the name/id of the package being installed

Parameters
  • name (str) – Name/id of the package being installed

  • pid (int) – id of the installer process

Returns

Colorized installing message

Return type

str

spack.installer.log(pkg)[source]

Copy provenance into the install directory on success

Parameters

pkg (spack.package_base.PackageBase) – the package that was built and installed

spack.installer.package_id(pkg)[source]

A “unique” package identifier for installation purposes

The identifier is used to track build tasks, locks, install, and failure statuses.

The identifier needs to distinguish between combinations of compilers and packages for combinatorial environments.

Parameters

pkg (spack.package_base.PackageBase) – the package from which the identifier is derived
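Conceptually the identifier combines the package name with enough spec information (version, hash) to distinguish combinatorial builds. A sketch, with the exact format assumed for illustration rather than taken from Spack:

```python
def package_id(name, version, dag_hash):
    """Build a 'unique' identifier for a package installation.

    The name alone is not enough in combinatorial environments, so the
    version and a spec hash are folded in (format is illustrative).
    """
    return "{0}-{1}-{2}".format(name, version, dag_hash)

# The same package built two different ways gets distinct identifiers.
a = package_id("zlib", "1.2.13", "abc123")
b = package_id("zlib", "1.2.13", "def456")
assert a != b
```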

spack.main module

This is the implementation of the Spack command line executable.

In a normal Spack installation, this is invoked from the bin/spack script after the system path is set up.

spack.main.SHOW_BACKTRACE = False

Whether to print backtraces on error

class spack.main.SpackArgumentParser(prog=None, usage=None, description=None, epilog=None, parents=[], formatter_class=<class 'argparse.HelpFormatter'>, prefix_chars='-', fromfile_prefix_chars=None, argument_default=None, conflict_handler='error', add_help=True, allow_abbrev=True)[source]

Bases: ArgumentParser

add_command(cmd_name)[source]

Add one subcommand to this parser.

add_subparsers(**kwargs)[source]

Ensure that sensible defaults are propagated to subparsers

format_help(level='short')[source]
format_help_sections(level)[source]

Format help on sections for a particular verbosity level.

Parameters

level (str) – ‘short’ or ‘long’ (more commands shown for long)

class spack.main.SpackCommand(command_name, subprocess=False)[source]

Bases: object

Callable object that invokes a spack command (for testing).

Example usage:

install = SpackCommand('install')
install('-v', 'mpich')

Use this to invoke Spack commands directly from Python and check their output.

exception spack.main.SpackCommandError[source]

Bases: Exception

Raised when SpackCommand execution fails.

class spack.main.SpackHelpFormatter(prog, indent_increment=2, max_help_position=24, width=None)[source]

Bases: RawTextHelpFormatter

add_arguments(actions)[source]
spack.main.add_all_commands(parser)[source]

Add all spack subcommands to the parser.

spack.main.aliases = {'rm': 'remove'}

top-level aliases for Spack commands

spack.main.allows_unknown_args(command)[source]

Implements really simple argument injection for unknown arguments.

Commands may add an optional argument called “unknown args” to indicate they can handle unknown args, and we’ll pass the unknown args in.

spack.main.finish_parse_and_run(parser, cmd_name, env_format_error)[source]

Finish parsing after we know the command to run.

spack.main.get_version()[source]

Get a descriptive version of this instance of Spack.

Outputs ‘<PEP440 version> (<git commit sha>)’.

The commit sha is only added when available.

spack.main.index_commands()[source]

create an index of commands by section for this help level

spack.main.intro_by_level = {'long': 'Complete list of spack commands:', 'short': 'These are common spack commands:'}

intro text for help at different levels

spack.main.levels = ['short', 'long']

help levels in order of detail (i.e., number of commands shown)

spack.main.main(argv=None)[source]

This is the entry point for the Spack command.

main() itself is just an error handler – it handles errors for everything in Spack that makes it to the top level.

The logic is all in _main().

Parameters

argv (list or None) – command line arguments, NOT including the executable name. If None, parses from sys.argv.

spack.main.make_argument_parser(**kwargs)[source]

Create a basic argument parser without any subcommands added.

spack.main.options_by_level = {'long': 'all', 'short': ['h', 'k', 'V', 'color']}

control top-level spack options shown in basic vs. advanced help

spack.main.print_setup_info(*info)[source]

Print basic information needed by setup-env.[c]sh.

Parameters

info (list) – list of things to print: comma-separated list of ‘csh’, ‘sh’, or ‘modules’

This is in main.py to make it fast; the setup scripts need to invoke spack in login scripts, and it needs to be quick.

spack.main.required_command_properties = ['level', 'section', 'description']

Properties that commands are required to set.

spack.main.section_descriptions = {'admin': 'administration', 'basic': 'query packages', 'build': 'build packages', 'config': 'configuration', 'developer': 'developer', 'environment': 'environment', 'extensions': 'extensions', 'help': 'more help', 'packaging': 'create packages', 'system': 'system'}

Longer text for each section, to show in help

spack.main.section_order = {'basic': ['list', 'info', 'find'], 'build': ['fetch', 'stage', 'patch', 'configure', 'build', 'restage', 'install', 'uninstall', 'clean'], 'packaging': ['create', 'edit']}

preferential command order for some sections (e.g., build pipeline is in execution order, not alphabetical)

spack.main.send_warning_to_tty(message, *args)[source]

Redirects messages to tty.warn.

spack.main.set_working_dir()[source]

Change the working directory to getcwd, or spack prefix if no cwd.

spack.main.setup_main_options(args)[source]

Configure spack globals based on the basic options.

spack.main.spack_working_dir = None

Recorded directory where spack command was originally invoked

spack.main.stat_names = {'calls': (((1, -1),), 'call count'), 'cumtime': (((3, -1),), 'cumulative time'), 'cumulative': (((3, -1),), 'cumulative time'), 'filename': (((4, 1),), 'file name'), 'line': (((5, 1),), 'line number'), 'module': (((4, 1),), 'file name'), 'name': (((6, 1),), 'function name'), 'ncalls': (((1, -1),), 'call count'), 'nfl': (((6, 1), (4, 1), (5, 1)), 'name/file/line'), 'pcalls': (((0, -1),), 'primitive call count'), 'stdname': (((7, 1),), 'standard name'), 'time': (((2, -1),), 'internal time'), 'tottime': (((2, -1),), 'internal time')}

names of profile statistics

spack.mirror module

This file contains code for creating spack mirror directories. A mirror is an organized hierarchy containing specially named archive files. This enables spack to know where to find files in a mirror if the main server for a particular package is down. Or, if the computer where spack is run is not connected to the internet, it allows spack to download packages directly from a mirror (e.g., on an intranet).

class spack.mirror.Mirror(fetch_url, push_url=None, name=None)[source]

Bases: object

Represents a named location for storing source tarballs and binary packages.

Mirrors have a fetch_url that indicates where and how artifacts are fetched from them, and a push_url that indicates where and how artifacts are pushed to them. These two URLs are usually the same.

display(max_len=0)[source]
property fetch_url
static from_dict(d, name=None)[source]
static from_json(stream, name=None)[source]
static from_yaml(stream, name=None)[source]
get_access_pair(url_type)[source]
get_access_token(url_type)[source]
get_endpoint_url(url_type)[source]
get_profile(url_type)[source]
property name
property push_url
set_access_pair(url_type, connection_tuple)[source]
set_access_token(url_type, connection_token)[source]
set_endpoint_url(url_type, url)[source]
set_profile(url_type, profile)[source]
to_dict()[source]
to_json(stream=None)[source]
to_yaml(stream=None)[source]
class spack.mirror.MirrorCollection(mirrors=None, scope=None)[source]

Bases: Mapping

A mapping of mirror names to mirrors.

display()[source]
static from_dict(d)[source]
static from_json(stream, name=None)[source]
static from_yaml(stream, name=None)[source]
lookup(name_or_url)[source]

Looks up and returns a Mirror.

If this MirrorCollection contains a named Mirror under the name [name_or_url], then that mirror is returned. Otherwise, [name_or_url] is assumed to be a mirror URL, and an anonymous mirror with the given URL is returned.
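That lookup rule is a simple name-first, URL-fallback dispatch; a sketch with a toy Mirror class (the URLs below are made up for illustration):

```python
class Mirror:
    """Toy stand-in for spack.mirror.Mirror: just a fetch URL and a name."""
    def __init__(self, fetch_url, name=None):
        self.fetch_url = fetch_url
        self.name = name

def lookup(mirrors, name_or_url):
    """Named mirror if present, else an anonymous mirror for the URL."""
    if name_or_url in mirrors:
        return mirrors[name_or_url]
    return Mirror(fetch_url=name_or_url)

mirrors = {"local": Mirror("file:///mirrors/spack", name="local")}
assert lookup(mirrors, "local").name == "local"
anon = lookup(mirrors, "https://example.com/mirror")
assert anon.name is None and anon.fetch_url.startswith("https://")
```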

to_dict(recursive=False)[source]
to_json(stream=None)[source]
to_yaml(stream=None)[source]
exception spack.mirror.MirrorError(msg, long_msg=None)[source]

Bases: SpackError

Superclass of all mirror-creation related errors.

class spack.mirror.MirrorReference(cosmetic_path, global_path=None)[source]

Bases: object

A MirrorReference stores the relative paths where you can store a package/resource in a mirror directory.

The appropriate storage location is given by storage_path. The cosmetic_path property provides a reference that a human could generate themselves based on reading the details of the package.

A user can iterate over a MirrorReference object to get all the possible names that might be used to refer to the resource in a mirror; this includes names generated by previous naming schemes that are no longer reported by storage_path or cosmetic_path.

property storage_path
class spack.mirror.MirrorStats[source]

Bases: object

added(resource)[source]
already_existed(resource)[source]
error()[source]
next_spec(spec)[source]
stats()[source]
spack.mirror.add(name, url, scope, args={})[source]

Add a named mirror in the given scope

spack.mirror.create(path, specs, skip_unstable_versions=False)[source]

Create a directory to be used as a spack mirror, and fill it with package archives.

Parameters
  • path – Path to create a mirror directory hierarchy in.

  • specs – Any package versions matching these specs will be added to the mirror.

  • skip_unstable_versions – if true, this skips adding resources when they do not have a stable archive checksum (as determined by fetch_strategy.stable_target)

Return Value:

Returns a tuple of lists: (present, mirrored, error)

  • present: Package specs that were already present.

  • mirrored: Package specs that were successfully mirrored.

  • error: Package specs that failed to mirror due to some error.

spack.mirror.create_mirror_from_package_object(pkg_obj, mirror_cache, mirror_stats)[source]

Add a single package object to a mirror.

The package object is only required to have an associated spec with a concrete version.

Parameters
Returns

True if the spec was added successfully, False otherwise

spack.mirror.get_all_versions(specs)[source]

Given a set of initial specs, return a new set of specs that includes each version of each package in the original set.

Note that if any spec in the original set specifies properties other than version, this information will be omitted in the new set; for example, the new set of specs will not include variant settings.

spack.mirror.get_matching_versions(specs, num_versions=1)[source]

Get a spec for EACH known version matching any spec in the list. For concrete specs, this retrieves the concrete version and, if more than one version per spec is requested, retrieves the latest versions of the package.

spack.mirror.mirror_archive_paths(fetcher, per_package_ref, spec=None)[source]

Returns a MirrorReference object which keeps track of the relative storage path of the resource associated with the specified fetcher.

spack.mirror.mirror_cache_and_stats(path, skip_unstable_versions=False)[source]

Return both a mirror cache and a mirror stats, starting from the path where a mirror ought to be created.

Parameters
  • path (str) – path to create a mirror directory hierarchy in.

  • skip_unstable_versions – if true, this skips adding resources when they do not have a stable archive checksum (as determined by fetch_strategy.stable_target)

spack.mirror.push_url_from_directory(output_directory)[source]

Given a directory in the local filesystem, return the URL on which to push binary packages.

spack.mirror.push_url_from_mirror_name(mirror_name)[source]

Given a mirror name, return the URL on which to push binary packages.

spack.mirror.push_url_from_mirror_url(mirror_url)[source]

Given a mirror URL, return the URL on which to push binary packages.

spack.mirror.remove(name, scope)[source]

Remove the named mirror in the given scope

spack.mixins module

This module contains additional behavior that can be attached to any given package.

spack.mixins.filter_compiler_wrappers(*files, **kwargs)[source]

Substitutes any path referring to a Spack compiler wrapper with the path of the underlying compiler that has been used.

If this isn’t done, the files will have CC, CXX, F77, and FC set to Spack’s generic cc, c++, f77, and f90. We want them to be bound to whatever compiler they were built with.

Parameters
  • *files – files to be filtered relative to the search root (which is, by default, the installation prefix)

  • **kwargs

    allowed keyword arguments

    after

    specifies after which phase the files should be filtered (defaults to ‘install’)

    relative_root

    path relative to prefix where to start searching for the files to be filtered. If not set the install prefix will be used as the search root. It is highly recommended to set this, as searching from the installation prefix may affect performance severely in some cases.

    ignore_absent, backup

    these two keyword arguments, if present, will be forwarded to filter_file (see its documentation for more information on their behavior)

    recursive

    this keyword argument, if present, will be forwarded to find (see its documentation for more information on the behavior)
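A sketch of the substitution this mixin performs, using a minimal stand-in for Spack's filter_file and hypothetical wrapper/compiler paths:

```python
import os
import re
import tempfile

def filter_file(path, pattern, replacement):
    """Minimal stand-in for Spack's filter_file: rewrite matches in place."""
    with open(path) as f:
        text = f.read()
    with open(path, "w") as f:
        f.write(re.sub(pattern, replacement, text))

# A file that captured the path of Spack's cc wrapper at build time...
# (both paths here are hypothetical)
path = os.path.join(tempfile.mkdtemp(), "Makefile")
with open(path, "w") as f:
    f.write("CC = /spack/lib/spack/env/cc\n")

# ...is rewritten to point at the underlying compiler after install.
filter_file(path, r"/spack/lib/spack/env/cc", "/usr/bin/gcc")
print(open(path).read())  # CC = /usr/bin/gcc
```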

spack.multimethod module

This module contains utilities for using multi-methods in spack. You can think of multi-methods like overloaded methods – they’re methods with the same name, and we need to select a version of the method based on some criteria. e.g., for overloaded methods, you would select a version of the method to call based on the types of its arguments.

In spack, multi-methods are used to ease the life of package authors. They allow methods like install() (or other methods called by install()) to declare multiple versions to be called when the package is instantiated with different specs. e.g., if the package is built with OpenMPI on x86_64, you might want to call a different install method than if it was built for mpich2 on BlueGene/Q. Likewise, you might want to do a different type of install for different versions of the package.

Multi-methods provide a simple decorator-based syntax for this that avoids overly complicated rats' nests of if statements. Obviously, depending on the scenario, regular old conditionals might be clearer, so package authors should use their judgement.
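
The decorator-based dispatch described above can be sketched with a self-contained toy. Everything here is a simplification for illustration: conditions are matched as plain substrings of a string spec, and the registry is a module-level dict, whereas real Spack evaluates full Spec constraints and collects method versions through MultiMethodMeta:

```python
import functools

_VERSIONS = {}  # method name -> list of (condition, function) pairs


def when(condition):
    """Toy stand-in for spack.multimethod.when (assumed simplified model)."""
    def decorator(method):
        _VERSIONS.setdefault(method.__name__, []).append((condition, method))

        @functools.wraps(method)
        def dispatcher(self, *args, **kwargs):
            # Real Spack checks self.spec.satisfies(condition); here we just
            # do a substring match on a plain string spec.
            for cond, fn in _VERSIONS[method.__name__]:
                if cond in self.spec:
                    return fn(self, *args, **kwargs)
            raise RuntimeError(
                f"no version of {method.__name__} matches {self.spec!r}")
        return dispatcher
    return decorator


class ToyPackage:
    def __init__(self, spec):
        self.spec = spec

    @when("^openmpi")
    def install(self):
        return "install with OpenMPI tweaks"

    @when("^mpich")
    def install(self):
        return "install with MPICH tweaks"
```

Calling `ToyPackage("toy ^openmpi").install()` selects the OpenMPI version; a spec containing `^mpich` selects the other. The point is the shape of the mechanism, not the matching rules.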

exception spack.multimethod.MultiMethodError(message)[source]

Bases: SpackError

Superclass for multimethod dispatch errors

class spack.multimethod.MultiMethodMeta(name, bases, attr_dict)[source]

Bases: type

This allows us to track the class’s dict during instantiation.

exception spack.multimethod.NoSuchMethodError(cls, method_name, spec, possible_specs)[source]

Bases: SpackError

Raised when we can’t find a version of a multi-method.

class spack.multimethod.SpecMultiMethod(default=None)[source]

Bases: object

This implements a multi-method for Spack specs. Packages are instantiated with a particular spec, and you may want to execute different versions of methods based on what the spec looks like. For example, you might want to call a different version of install() for one platform than you call on another.

The SpecMultiMethod class implements a callable object that handles method dispatch. When it is called, it looks through registered methods and their associated specs, and it tries to find one that matches the package’s spec. If it finds one (and only one), it will call that method.

This is intended for use with decorators (see below): the decorator creates SpecMultiMethods and registers method versions with them.

To register a method, you can do something like this:

mm = SpecMultiMethod()
mm.register("^chaos_5_x86_64_ib", some_method)

The object registered needs to be a Spec or some string that will parse to be a valid spec.

When the mm is actually called, it selects a version of the method to call based on the sys_type of the object it is called on.

See the docs for decorators below for more details.

register(spec, method)[source]

Register a version of a method for a particular spec.

class spack.multimethod.when(condition)[source]

Bases: object

spack.package module

spack.util.package is a set of useful build tools and directives for packages.

Everything in this module is automatically imported into Spack package files.

spack.package_base module

This is where most of the action happens in Spack.

The spack package class structure is based strongly on Homebrew (http://brew.sh/), mainly because Homebrew makes it very easy to create packages.

exception spack.package_base.ActivationError(msg, long_msg=None)[source]

Bases: ExtensionError

Raised when there are problems activating an extension.

exception spack.package_base.DependencyConflictError(conflict)[source]

Bases: SpackError

Raised when the dependencies cannot be flattened as asked for.

class spack.package_base.DetectablePackageMeta(name, bases, attr_dict)[source]

Bases: object

Check if a package is detectable and add default implementations for the detection function.

exception spack.package_base.ExtensionError(message, long_msg=None)[source]

Bases: PackageError

Superclass for all errors having to do with extension packages.

exception spack.package_base.InvalidPackageOpError(message, long_msg=None)[source]

Bases: PackageError

Raised when someone tries to perform an invalid operation on a package.

exception spack.package_base.NoURLError(cls)[source]

Bases: PackageError

Raised when someone tries to build a URL for a package with no URLs.

class spack.package_base.PackageBase(spec)[source]

Bases: WindowsRPathMeta, PackageViewMixin, object

This is the superclass for all spack packages.

*The Package class*

At its core, a package consists of a set of software to be installed. A package may focus on a piece of software and its associated software dependencies or it may simply be a set, or bundle, of software. The former requires defining how to fetch, verify (via, e.g., sha256), build, and install that software and the packages it depends on, so that dependencies can be installed along with the package itself. The latter, sometimes referred to as a no-source package, requires only defining the packages to be built.

Packages are written in pure Python.

There are two main parts of a Spack package:

  1. The package class. Classes contain directives, which are special functions, that add metadata (versions, patches, dependencies, and other information) to packages (see directives.py). Directives provide the constraints that are used as input to the concretizer.

  2. Package instances. Once instantiated, a package is essentially a software installer. Spack calls methods like do_install() on the Package object, and it uses those to drive user-implemented methods like patch(), install(), and other build steps. To install software, an instantiated package needs a concrete spec, which guides the behavior of the various install methods.

Packages are imported from repos (see repo.py).

Package DSL

Look in lib/spack/docs or check https://spack.readthedocs.io for the full documentation of the package domain-specific language. That used to be partially documented here, but as it grew, the docs here became increasingly out of date.

Package Lifecycle

A package’s lifecycle over a run of Spack looks something like this:

p = Package()             # Done for you by spack

p.do_fetch()              # downloads tarball from a URL (or VCS)
p.do_stage()              # expands tarball in a temp directory
p.do_patch()              # applies patches to expanded source
p.do_install()            # calls package's install() function
p.do_uninstall()          # removes install directory

although packages that do not have code have nothing to fetch, so they omit p.do_fetch().

There are also some other commands that clean the build area:

p.do_clean()              # removes the stage directory entirely
p.do_restage()            # removes the build directory and
                          # re-expands the archive.

The convention used here is that a do_* function is intended to be called internally by Spack commands (in spack.cmd). These aren’t for package writers to override, and doing so may break the functionality of the Package class.

Package creators have a lot of freedom, and they could technically override anything in this class. That is not usually required.

For most use cases, package creators typically just add attributes like homepage and, for a code-based package, url, or functions such as install(). There are many custom Package subclasses in the spack.build_systems package that make things even easier for specific build systems.

classmethod all_patches()[source]

Retrieve all patches associated with the package.

Retrieves patches on the package itself as well as patches on the dependencies of the package.

property all_urls

A list of all URLs in a package.

Check both class-level and version-specific URLs.

Returns

a list of URLs

Return type

list

all_urls_for_version(version)[source]

Return all URLs derived from version_urls(), url, urls, and list_url (if it contains a version) in a package in that order.

Parameters

version (spack.version.Version) – the version for which a URL is sought

property build_log_path

Return the expected (or current) build log file path. The path points to the staging build file until the software is successfully installed, when it points to the file in the installation directory.

classmethod build_system_flags(name, flags)[source]

flag_handler that passes flags to the build system arguments. Any package using build_system_flags must also implement flags_to_build_system_args, or derive from a class that implements it. Currently, AutotoolsPackage and CMakePackage implement it.
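
The three built-in flag-handler strategies (build_system_flags here, plus env_flags and inject_flags below) can be sketched as plain functions. The return triple (injected, env, build_system) follows the flag_handler convention as understood from these docs; treat the exact details as an assumption rather than the authoritative implementation:

```python
def inject_flags(name, flags):
    # Flags are passed through the compiler wrapper.
    return (flags, None, None)


def env_flags(name, flags):
    # Flags go into canonical environment variables (CFLAGS, LDFLAGS, ...).
    return (None, flags, None)


def build_system_flags(name, flags):
    # Flags are forwarded to the build system arguments; this requires the
    # package to implement flags_to_build_system_args.
    return (None, None, flags)
```

A package picks a strategy by setting its flag_handler; each handler only decides *where* a given flag set ends up.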

property builder
cache_extra_test_sources(srcs)[source]

Copy relative source paths to the corresponding install test subdir

This method is intended as an optional install test setup helper for grabbing source files/directories during the installation process and copying them to the installation test subdirectory for subsequent use during install testing.

Parameters

srcs (str or list) – relative path for files and/or subdirectories located in the staged source path that are to be copied to the corresponding location(s) under the install testing directory.

property cmake_prefix_paths
property compiler

Get the spack.compiler.Compiler object used to build this package

property configure_args_path

Return the configure args file path associated with staging.

content_hash(content=None)[source]

Create a hash based on the artifacts and patches used to build this package.

This includes:
  • source artifacts (tarballs, repositories) used to build;

  • content hashes (sha256’s) of all patches applied by Spack; and

  • canonicalized contents of the package.py recipe used to build.

This hash is only included in Spack’s DAG hash for concrete specs, but if it happens to be called on a package with an abstract spec, only applicable (i.e., determinable) portions of the hash will be included.

classmethod dependencies_of_type(*deptypes)[source]

Get dependencies that can possibly have these deptypes.

This analyzes the package and determines which dependencies can be a certain kind of dependency. Note that they may not always be this kind of dependency, since dependencies can be optional, so something may be a build dependency in one configuration and a run dependency in another.

do_clean()[source]

Removes the package’s build stage and source tarball.

do_deprecate(deprecator, link_fn)[source]

Deprecate this package in favor of deprecator spec

do_fetch(mirror_only=False)[source]

Creates a stage directory and downloads the tarball for this package. Working directory will be set to the stage directory.

do_install(**kwargs)[source]

Called by commands to install a package and/or its dependencies.

Package implementations should override install() to describe their build process.

Parameters
  • cache_only (bool) – Fail if binary package unavailable.

  • dirty (bool) – Don’t clean the build environment before installing.

  • explicit (bool) – True if package was explicitly installed, False if package was implicitly installed (as a dependency).

  • fail_fast (bool) – Fail if any dependency fails to install; otherwise, the default is to install as many dependencies as possible (i.e., best effort installation).

  • fake (bool) – Don’t really build; install fake stub files instead.

  • force (bool) – Install again, even if already installed.

  • install_deps (bool) – Install dependencies before installing this package

  • install_source (bool) – By default, source is not installed, but for debugging it might be useful to keep it around.

  • keep_prefix (bool) – Keep install prefix on failure. By default, destroys it.

  • keep_stage (bool) – By default, stage is destroyed only if there are no exceptions during build. Set to True to keep the stage even with exceptions.

  • restage (bool) – Force spack to restage the package source.

  • skip_patch (bool) – Skip patch stage of build if True.

  • stop_before (str) – stop execution before this installation phase (or None)

  • stop_at (str) – last installation phase to be executed (or None)

  • tests (bool or list or set) – False to run no tests, True to test all packages, or a list of package names to run tests for some

  • use_cache (bool) – Install from binary package, if available.

  • verbose (bool) – Display verbose build output (by default, suppresses it)

do_patch()[source]

Applies patches if they haven’t been applied already.

do_restage()[source]

Reverts expanded/checked out source to a pristine state.

do_stage(mirror_only=False)[source]

Unpacks and expands the fetched tarball.

do_test(dirty=False, externals=False)[source]
do_uninstall(force=False)[source]

Uninstall this package by spec.

property download_instr

Defines the default manual download instructions. Packages can override the property to provide more information.

Returns

default manual download instructions

Return type

(str)

classmethod env_flags(name, flags)[source]

flag_handler that adds all flags to canonical environment variables.

property env_mods_path

Return the build environment modifications file path associated with staging.

property env_path

Return the build environment file path associated with staging.

extendable = False

Most packages are NOT extendable. Set to True if you want extensions.

property extendee_args

Spec of the extendee of this package, or None if it is not an extension

property extendee_spec

Spec of the extendee of this package, or None if it is not an extension

extends(spec)[source]

Returns True if this package extends the given spec.

If self.spec is concrete, this returns whether this package extends the given spec.

If self.spec is not concrete, this returns whether this package may extend the given spec.

fetch_options = {}

Set of additional options used when fetching package versions.

fetch_remote_versions(concurrency=128)[source]

Find remote versions of this package.

Uses list_url and any other URLs listed in the package file.

Returns

a dictionary mapping versions to URLs

Return type

dict

property fetcher
find_valid_url_for_version(version)[source]

Returns a URL from which the specified version of this package may be downloaded after testing whether the url is valid. Will try url, urls, and list_url before failing.

version: class Version

The version for which a URL is sought.

See Class Version (version.py)

property flag_handler
flags_to_build_system_args(flags)[source]
classmethod format_doc(**kwargs)[source]

Wrap doc string at 72 characters and format nicely

fullname = 'spack.package_base'
fullnames = ['spack.package_base']
global_license_dir = '/home/docs/checkouts/readthedocs.org/user_builds/spack/checkouts/v0.19.1/lib/spack/docs/_spack_root/etc/spack/licenses'
property global_license_file

Returns the path where a global license file for this particular package should be stored.

has_code = True

Most Spack packages are used to install source or binary code; those that are not can be used to install a set of other Spack packages.

property home
homepage = None

Package homepage where users can find more information about the package

classmethod inject_flags(name, flags)[source]

flag_handler that injects all flags through the compiler wrapper.

property install_configure_args_path

Return the configure args file path on successful installation.

property install_env_path

Return the build environment file path on successful installation.

property install_log_path

Return the build log file path on successful installation.

property install_test_install_log_path

Return the install location for the install phase-time test log.

property install_test_root

Return the install test root directory.

property installed
property installed_upstream
property is_extension
license_comment = '#'

String. Contains the symbol used by the license manager to denote a comment. Defaults to #.

license_files = []

List of strings. These are files that the software searches for when looking for a license. All file paths must be relative to the installation directory. More complex packages like Intel may require multiple licenses for individual components. Defaults to the empty list.

license_required = False

Boolean. If set to True, this software requires a license. If set to False, all of the license_* attributes will be ignored. Defaults to False.

license_url = ''

String. A URL pointing to license setup instructions for the software. Defaults to the empty string.

license_vars = []

List of strings. Environment variables that can be set to tell the software where to look for a license if it is not in the usual location. Defaults to the empty list.

list_depth = 0

Link depth to which list_url should be searched for new versions

list_url = None

Default list URL (place to find available versions)

property log_path

Return the build log file path associated with staging.

maintainers = []

List of strings which contains GitHub usernames of package maintainers. Do not include @ here in order not to unnecessarily ping the users.

manual_download = False

Boolean. Set to True for packages that require a manual download. This is currently used by package sanity tests and generation of a more meaningful fetch failure error.

metadata_attrs = ['homepage', 'url', 'urls', 'list_url', 'extendable', 'parallel', 'make_jobs', 'maintainers', 'tags']

List of attributes to be excluded from a package’s hash.

property metadata_dir

Return the install metadata directory.

module = <module 'spack.package_base' from '/home/docs/checkouts/readthedocs.org/user_builds/spack/checkouts/v0.19.1/lib/spack/docs/_spack_root/lib/spack/spack/package_base.py'>
name = 'package_base'
namespace = 'spack'
nearest_url(version)[source]

Finds the URL with the “closest” version to version.

This uses the following precedence order:

  1. Find the next lowest or equal version with a URL.

  2. If no lower URL, return the next higher URL.

  3. If no higher URL, return None.
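
The precedence above can be modeled with a few lines (plain integers stand in for Version objects here; the ordering logic is the point, not the types):

```python
def nearest_url(version, version_urls):
    """version_urls maps versions to URLs, e.g. {1: "pkg-1.tar.gz"}."""
    url = None
    for v in sorted(version_urls):
        if v <= version:
            url = version_urls[v]       # rule 1: next lowest or equal version
        elif url is None:
            return version_urls[v]      # rule 2: no lower URL, take next higher
    return url                          # rule 3: may be None if no URLs at all
```

With `{1: "pkg-1.tar.gz", 3: "pkg-3.tar.gz"}`, looking up version 2 falls back to the version-1 URL, version 0 falls forward to it, and version 5 gets the version-3 URL.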

non_bindable_shared_objects = []

List of shared objects that should be replaced with a different library at runtime. Typically includes stub libraries like libcuda.so. When linking against a library listed here, the dependent will only record its soname or filename, not its absolute path, so that the dynamic linker will search for it. Note: this accepts both file names and directory names; for example, ["libcuda.so", "stubs"] ensures that libcuda.so and all libraries in the stubs directory are not bound by path.

package_dir = '/home/docs/checkouts/readthedocs.org/user_builds/spack/checkouts/v0.19.1/lib/spack/docs/_spack_root/lib/spack/spack'
parallel = True

By default we build in parallel. Subclasses can override this.

property phase_log_files

Find sorted phase log files written to the staging directory

classmethod possible_dependencies(transitive=True, expand_virtuals=True, deptype='all', visited=None, missing=None, virtuals=None)[source]

Return dict of possible dependencies of this package.

Parameters
  • transitive (bool or None) – return all transitive dependencies if True, only direct dependencies if False (default True).

  • expand_virtuals (bool or None) – expand virtual dependencies into all possible implementations (default True)

  • deptype (str or tuple or None) – dependency types to consider

  • visited (dict or None) – dict of names of dependencies visited so far, mapped to their immediate dependencies’ names.

  • missing (dict or None) – dict to populate with packages and their missing dependencies.

  • virtuals (set) – if provided, populate with virtuals seen so far.

Returns

dictionary mapping dependency names to their immediate dependencies

Return type

(dict)

Each item in the returned dictionary maps a (potentially transitive) dependency of this package to its possible immediate dependencies. If expand_virtuals is False, virtual package names will be inserted as keys mapped to empty sets of dependencies. Virtuals, if not expanded, are treated as though they have no immediate dependencies.

Missing dependencies by default are ignored, but if a missing dict is provided, it will be populated with package names mapped to any dependencies they have that are in no repositories. This is only populated if transitive is True.

Note: the returned dict includes the package itself.
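
The shape of the returned structure can be sketched with a toy traversal, where a plain name-to-deps mapping stands in for Spack's package repository (package names here are hypothetical):

```python
def possible_dependencies(name, graph, visited=None):
    """Return a dict mapping each reachable package (including `name`
    itself) to its set of immediate dependencies."""
    visited = {} if visited is None else visited
    if name in visited:
        return visited
    visited[name] = set(graph.get(name, ()))
    for dep in visited[name]:
        possible_dependencies(dep, graph, visited)
    return visited


graph = {"mpileaks": {"mpi", "callpath"}, "callpath": {"dyninst"}}
deps = possible_dependencies("mpileaks", graph)
```

Here `deps` contains mpileaks itself plus every transitive dependency, each mapped to its immediate dependencies; names absent from the graph (like the virtual "mpi") map to empty sets, mirroring the unexpanded-virtuals behavior described above.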

property prefix

Get the prefix into which this package should be installed.

provides(vpkg_name)[source]

True if this package provides a virtual package with the specified name

remove_prefix()[source]

Removes the prefix for a package along with any empty parent directories

property rpath

Get the rpath this package links with, as a list of paths.

property rpath_args

Get the rpath args as a string, with -Wl,-rpath, for each element

run_test(exe, options=[], expected=[], status=0, installed=False, purpose='', skip_missing=False, work_dir=None)[source]

Run the test and confirm the expected results are obtained

Log any failures and continue, they will be re-raised later

Parameters
  • exe (str) – the name of the executable

  • options (str or list) – list of options to pass to the runner

  • expected (str or list) – list of expected output strings. Each string is a regex expected to match part of the output.

  • status (int or list) – possible passing status values with 0 meaning the test is expected to succeed

  • installed (bool) – if True, the executable must be in the install prefix

  • purpose (str) – message to display before running test

  • skip_missing (bool) – skip the test if the executable is not in the install prefix bin directory or the provided work_dir

  • work_dir (str or None) – path to the smoke test directory

static run_test_callbacks(builder, method_names, callback_type='install')[source]

Tries to call all of the listed methods, returning immediately if the list is None.

run_tests = False

By default do not run tests within package’s install()

sanity_check_is_dir = []

List of prefix-relative directory paths (or a single path). If these do not exist after install, or if they exist but are not directories, sanity checks will fail.

sanity_check_is_file = []

List of prefix-relative file paths (or a single path). If these do not exist after install, or if they exist but are not files, sanity checks fail.
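
The semantics of sanity_check_is_file and sanity_check_is_dir can be mirrored by a small post-install check (a toy sketch, not Spack's actual checking code):

```python
import os


def run_sanity_checks(prefix, is_file, is_dir):
    """Fail if any listed prefix-relative path is missing or the wrong type."""
    for rel in is_file:
        path = os.path.join(prefix, rel)
        if not os.path.isfile(path):
            raise RuntimeError(f"sanity check failed: {path} is not a file")
    for rel in is_dir:
        path = os.path.join(prefix, rel)
        if not os.path.isdir(path):
            raise RuntimeError(f"sanity check failed: {path} is not a directory")
```

A package would simply list the paths (e.g. `sanity_check_is_file = ["bin/mytool"]`, a hypothetical example) and the install machinery performs checks of this kind after install.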

setup_dependent_package(module, dependent_spec)[source]

Set up Python module-scope variables for dependent packages.

Called before the install() method of dependents.

Default implementation does nothing, but this can be overridden by an extendable package to set up the module of its extensions. This is useful if there are some common steps to installing all extensions for a certain package.

Examples:

  1. Extensions often need to invoke the python interpreter from the Python installation being extended. This routine can put a python() Executable object in the module scope for the extension package to simplify extension installs.

  2. MPI compilers could set some variables in the dependent’s scope that point to mpicc, mpicxx, etc., allowing them to be called by common name regardless of which MPI is used.

  3. BLAS/LAPACK implementations can set some variables indicating the path to their libraries, since these paths differ by BLAS/LAPACK implementation.

Parameters
  • module (spack.package_base.PackageBase.module) – The Python module object of the dependent package. Packages can use this to set module-scope variables for the dependent to use.

  • dependent_spec (spack.spec.Spec) – The spec of the dependent package about to be built. This allows the extendee (self) to query the dependent’s state. Note that this package’s spec is available as self.spec.

setup_dependent_run_environment(env, dependent_spec)[source]

Sets up the run environment of packages that depend on this one.

This is similar to setup_run_environment, but it is used to modify the run environments of packages that depend on this one.

This gives packages like Python and others that follow the extension model a way to implement common environment or run-time settings for dependencies.

Parameters
  • env (spack.util.environment.EnvironmentModifications) – environment modifications to be applied when the dependent package is run. Package authors can call methods on it to alter the build environment.

  • dependent_spec (spack.spec.Spec) – The spec of the dependent package about to be run. This allows the extendee (self) to query the dependent’s state. Note that this package’s spec is available as self.spec

setup_run_environment(env)[source]

Sets up the run environment for a package.

Parameters

env (spack.util.environment.EnvironmentModifications) – environment modifications to be applied when the package is run. Package authors can call methods on it to alter the run environment.
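
A typical override looks like the following sketch. The EnvironmentModifications class here is a minimal stand-in (assumed interface: methods record modifications to be replayed later), and MyLib and its prefix are hypothetical:

```python
class EnvironmentModifications:
    """Toy stand-in for spack.util.environment.EnvironmentModifications."""

    def __init__(self):
        self.mods = []  # recorded (action, variable, value) tuples

    def set(self, name, value):
        self.mods.append(("set", name, value))

    def prepend_path(self, name, path):
        self.mods.append(("prepend_path", name, path))


class MyLib:
    prefix = "/opt/mylib"  # hypothetical install prefix

    def setup_run_environment(self, env):
        # Record the variables users need when running against this package.
        env.set("MYLIB_ROOT", self.prefix)
        env.prepend_path("LD_LIBRARY_PATH", self.prefix + "/lib")
```

The package only records modifications on the env object; applying them (e.g. when generating modules or loading the package) is done elsewhere.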

property stage

Get the build staging area for this package.

This automatically instantiates a Stage object if the package doesn’t have one yet, but it does not create the Stage directory on the filesystem.

test()[source]
test_failures = None

List of test failures encountered during a smoke/install test run.

property test_install_log_path

Return the install phase-time test log file path, if set.

test_requires_compiler = False

Boolean. If set to True, the smoke/install test requires a compiler. This is currently used by smoke tests to ensure a compiler is available to build a custom test code.

test_suite = None

TestSuite instance used to manage smoke/install tests for one or more specs.

property times_log_path

Return the times log json file.

transitive_rpaths = True

When True, add RPATHs for the entire DAG. When False, add RPATHs only for immediate dependencies.

static uninstall_by_spec(spec, force=False, deprecator=None)[source]
unit_test_check()[source]

Hook for unit tests to assert things about package internals.

Unit tests can override this function to perform checks after Package.install and all post-install hooks run, but before the database is updated.

The overridden function may indicate that the install procedure should terminate early (before updating the database) by returning False (or any value such that bool(result) is False).

Returns

True to continue, False to skip install()

Return type

(bool)

update_external_dependencies()[source]

Method to override in package classes to handle external dependencies

url_for_version(version)[source]

Returns a URL from which the specified version of this package may be downloaded.

version: class Version

The version for which a URL is sought.

See Class Version (version.py)

url_version(version)[source]

Given a version, this returns a string that should be substituted into the package’s URL to download that version.

By default, this just returns the version string. Subclasses may need to override this, e.g. for boost versions where you need to ensure that there are _’s in the download URL.
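
An override of the boost-style kind mentioned above could look like this sketch, turning dots into the underscores the download URL expects:

```python
def url_version(version):
    # e.g. "1.78.0" -> "1_78_0" to match a boost-style download URL scheme
    return str(version).replace(".", "_")
```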

use_xcode = False

By default do not setup mockup XCode on macOS with Clang

property version
classmethod version_urls()[source]

OrderedDict of explicitly defined URLs for versions of this package.

Returns

An OrderedDict (version -> URL) of the different versions of this package, sorted by version.

A version’s URL only appears in the result if it has an explicitly defined url argument. So, this list may be empty if a package only defines url at the top level.

view()[source]

Create a view with the prefix of this package as the root. Extensions added to this view will modify the installation prefix of this package.

virtual = False

By default, packages are not virtual. Virtual packages override this attribute.

property virtuals_provided

virtual packages provided by this package with its spec

exception spack.package_base.PackageError(message, long_msg=None)[source]

Bases: SpackError

Raised when something is wrong with a package definition.

class spack.package_base.PackageMeta(name, bases, attr_dict)[source]

Bases: PhaseCallbacksMeta, DetectablePackageMeta, DirectiveMeta, MultiMethodMeta

Package metaclass for supporting directives (e.g., depends_on) and phases

exception spack.package_base.PackageStillNeededError(spec, dependents)[source]

Bases: InstallError

Raised when package is still needed by another on uninstall.

exception spack.package_base.PackageVersionError(version)[source]

Bases: PackageError

Raised when a version URL cannot automatically be determined.

class spack.package_base.PackageViewMixin[source]

Bases: object

This collects all functionality related to adding installed Spack package to views. Packages can customize how they are added to views by overriding these functions.

add_files_to_view(view, merge_map, skip_if_exists=True)[source]

Given a map of package files to destination paths in the view, add the files to the view. By default this adds all files. Alternative implementations may skip some files, for example if other packages linked into the view already include the file.

Parameters
  • view (spack.filesystem_view.FilesystemView) – the view that’s updated

  • merge_map (dict) – maps absolute source paths to absolute dest paths for all files in from this package.

  • skip_if_exists (bool) – when True, don’t link files in view when they already exist. When False, always link files, without checking if they already exist.

remove_files_from_view(view, merge_map)[source]

Given a map of package files to files currently linked in the view, remove the files from the view. The default implementation removes all files. Alternative implementations may not remove all files. For example if two packages include the same file, it should only be removed when both packages are removed.

view_destination(view)[source]

The target root directory: each file is added relative to this directory.

view_file_conflicts(view, merge_map)[source]

Report any files which prevent adding this package to the view. The default implementation looks for any files which already exist. Alternative implementations may allow some of the files to exist in the view (in this case they would be omitted from the results).

view_source()[source]

The source root directory that will be added to the view: files are added such that their path relative to the view destination matches their path relative to the view source.

class spack.package_base.WindowsRPathMeta[source]

Bases: object

Collection of functionality surrounding Windows RPATH specific features

This is essentially meaningless for all other platforms due to their use of RPATH. All methods within this class are no-ops on non-Windows platforms. Packages can customize and manipulate this class as they would a genuine RPATH, i.e. by adding directories that contain runtime library dependencies.

win_add_library_dependent()[source]

Return extra set of directories that require linking for package

This method should be overridden by packages that produce binaries/libraries/python extension modules/etc that are installed into directories outside a package’s bin, lib, and lib64 directories, but still require linking against one of the packages dependencies, or other components of the package itself. No-op otherwise.

Returns

List of additional directories that require linking

win_add_rpath()[source]

Return extra set of rpaths for package

This method should be overridden by packages needing to include additional paths to be searched by rpath. No-op otherwise

Returns

List of additional rpaths

windows_establish_runtime_linkage()[source]

Establish RPATH on Windows

Performs symlinking to incorporate rpath dependencies to Windows runtime search paths

spack.package_base.build_system_flags(name, flags)

flag_handler that passes flags to the build system arguments. Any package using build_system_flags must also implement flags_to_build_system_args, or derive from a class that implements it. Currently, AutotoolsPackage and CMakePackage implement it.

spack.package_base.deprecated_version(pkg, version)[source]

Return True if the version is deprecated, False otherwise.

Parameters
  • pkg (PackageBase) – The package whose version is to be checked.

  • version (str or spack.version.VersionBase) – The version being checked

spack.package_base.detectable_packages = {}

Registry of detectable packages, by repo and package name. Requires a pass over the package repositories to be filled.

spack.package_base.env_flags(name, flags)

flag_handler that adds all flags to canonical environment variables.

spack.package_base.flatten_dependencies(spec, flat_dir)[source]

Make each dependency of spec present in dir via symlink.

spack.package_base.has_test_method(pkg)[source]

Determine if the package defines its own stand-alone test method.

Parameters

pkg (str) – the package being checked

Returns

True if the package overrides the default method; else False

Return type

(bool)

spack.package_base.inject_flags(name, flags)

flag_handler that injects all flags through the compiler wrapper.

spack.package_base.install_dependency_symlinks(pkg, spec, prefix)[source]

spack.package_base.install_dependency_symlinks(pkg, spec, prefix)[source]

Execute a dummy install and flatten dependencies.

This routine can be used in a package.py definition by setting install = install_dependency_symlinks.

This feature comes in handy for creating a common location for the installation of third-party libraries.

spack.package_base.on_package_attributes(**attr_dict)[source]

Decorator: executes an instance method only if the object has the required attribute values.

Executes the decorated method only if at the moment of calling the instance has attributes that are equal to certain values.

Parameters

attr_dict (dict) – dictionary mapping attribute names to their required values
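The documented behavior can be sketched with a minimal reimplementation; this is an assumption-laden illustration, not Spack's code (Spack's real decorator may raise or skip differently when attributes are absent).

```python
import functools

# Minimal sketch of the documented semantics: run the wrapped method only
# when every attribute in attr_dict matches on the instance at call time.
def on_package_attributes(**attr_dict):
    def _decorator(func):
        @functools.wraps(func)
        def _wrapper(self, *args, **kwargs):
            if all(getattr(self, k, object()) == v for k, v in attr_dict.items()):
                return func(self, *args, **kwargs)
            # silently skip in this sketch
        return _wrapper
    return _decorator

class Demo:
    run_tests = True

    @on_package_attributes(run_tests=True)
    def check(self):
        return "ran"
```

Here Demo().check() runs normally, but setting run_tests to False makes the decorated method a no-op.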

spack.package_base.possible_dependencies(*pkg_or_spec, **kwargs)[source]

Get the possible dependencies of a number of packages.

See PackageBase.possible_dependencies for details.

spack.package_base.preferred_version(pkg)[source]

Returns the preferred version of the package.

Parameters

pkg (PackageBase) – The package whose versions are to be assessed.

spack.package_base.print_test_message(logger, msg, verbose)[source]
spack.package_base.test_log_pathname(test_stage, spec)[source]

Build the pathname of the test log file

Parameters
  • test_stage (str) – path to the test stage directory

  • spec (spack.spec.Spec) – instance of the spec under test

Returns

the pathname of the test log file

Return type

(str)

spack.package_base.test_process(pkg, kwargs)[source]
spack.package_base.use_cray_compiler_names()[source]

Compiler names for builds that rely on cray compiler names.

spack.package_prefs module

class spack.package_prefs.PackagePrefs(pkgname, component, vpkg=None, all=True)[source]

Bases: object

Defines the sort order for a set of specs.

Spack's package preference implementation uses PackagePrefs objects to define sort order. The PackagePrefs class looks at Spack's packages.yaml configuration and, when called on a spec, returns a key that can be used to sort that spec in order of the user's preferences.

You can use it like this:

    # key function sorts CompilerSpecs for mpich in order of preference
    kf = PackagePrefs('mpich', 'compiler')
    compiler_list.sort(key=kf)

Or like this:

    # key function to sort VersionLists for OpenMPI in order of preference
    kf = PackagePrefs('openmpi', 'version')
    version_list.sort(key=kf)

Optionally, you can sort in order of preferred virtual dependency providers. To do that, provide ‘providers’ and a third argument denoting the virtual package (e.g., mpi):

    kf = PackagePrefs('trilinos', 'providers', 'mpi')
    provider_spec_list.sort(key=kf)

classmethod has_preferred_providers(pkgname, vpkg)[source]

Whether a specific package has preferred providers for a virtual package.

classmethod has_preferred_targets(pkg_name)[source]

Whether a specific package has preferred targets.

classmethod order_for_package(pkgname, component, vpkg=None, all=True)[source]

Given a package name, a sort component (e.g., version, compiler, …), and an optional vpkg, return the ordering list from the packages config.

classmethod preferred_variants(pkg_name)[source]

Return a VariantMap of preferred variants/values for a spec.

exception spack.package_prefs.VirtualInPackagesYAMLError(message, long_message=None)[source]

Bases: SpackError

Raised when a disallowed virtual is found in packages.yaml

spack.package_prefs.get_package_dir_permissions(spec)[source]

Return the permissions configured for the spec.

Include the GID bit if group permissions are on. This makes the group attribute sticky for the directory. Package-specific settings take precedence over settings for all packages.

spack.package_prefs.get_package_group(spec)[source]

Return the unix group associated with the spec.

Package-specific settings take precedence over settings for all packages.

spack.package_prefs.get_package_permissions(spec)[source]

Return the permissions configured for the spec.

Package-specific settings take precedence over settings for all packages.

spack.package_prefs.is_spec_buildable(spec)[source]

Return True if the spec is configured as buildable.

spack.package_prefs.spec_externals(spec)[source]

Return a list of external specs (w/external directory path filled in), one for each known external installation.

spack.package_test module

spack.package_test.compare_output(current_output, blessed_output)[source]

Compare blessed and current output of executables.

spack.package_test.compare_output_file(current_output, blessed_output_file)[source]

Same as above, but when the blessed output is given as a file.

spack.package_test.compile_c_and_execute(source_file, include_flags, link_flags)[source]

Compile the C source_file with the given include_flags and link_flags, run the resulting executable, and return its output.

spack.parse module

exception spack.parse.LexError(message, string, pos)[source]

Bases: ParseError

Raised when we don’t know how to lex something.

class spack.parse.Lexer(lexicon0, mode_switches_01=[], lexicon1=[], mode_switches_10=[])[source]

Bases: object

Base class for Lexers that keep track of line numbers.

lex(text)[source]
lex_word(word)[source]
mode
mode_switches_01
mode_switches_10
scanner0
scanner1
token(type, value='')[source]
exception spack.parse.ParseError(message, string, pos)[source]

Bases: SpackError

Raised when we hit an error while parsing.

class spack.parse.Parser(lexer)[source]

Bases: object

Base class for simple recursive descent parsers.

accept(id)[source]

Put the next symbol in self.token if accepted, then call gettok()

expect(id)[source]

Like accept(), but fails if we don’t like the next token.

gettok()[source]

Puts the next token in the input stream into self.next.

last_token_error(message)[source]

Raise an error about the previous token in the stream.

lexer
next
next_token_error(message)[source]

Raise an error about the next token in the stream.

parse(text)[source]
push_tokens(iterable)[source]

Adds all tokens in some iterable to the token stream.

setup(text)[source]
text
token
tokens
unexpected_token()[source]
class spack.parse.Token(type, value='', start=0, end=0)[source]

Bases: object

Represents tokens; generated from input by lexer and fed to parse().

end
is_a(type)[source]
start
type
value
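The accept()/expect()/gettok() protocol documented above can be sketched with a generic stand-in; TinyParser below is hypothetical and not spack.parse.Parser, but it keeps the lookahead in self.next and the last consumed token in self.token, as described.

```python
# Generic sketch of a recursive descent parser's token handling
# (hypothetical stand-in, not Spack's Parser class).
class TinyParser:
    def __init__(self, tokens):
        self.tokens = iter(tokens)
        self.token = None  # last consumed token
        self.next = None   # lookahead token
        self.gettok()

    def gettok(self):
        """Put the next token in the input stream into self.next."""
        self.next = next(self.tokens, None)

    def accept(self, kind):
        """Consume the next token if it matches kind; return success."""
        if self.next == kind:
            self.token = self.next
            self.gettok()
            return True
        return False

    def expect(self, kind):
        """Like accept(), but fail if the next token doesn't match."""
        if not self.accept(kind):
            raise SyntaxError("expected %r, got %r" % (kind, self.next))

p = TinyParser(["NAME", "AT", "VERSION"])
p.expect("NAME")  # consumes "NAME"; lookahead is now "AT"
```

Grammar rules then become methods that call accept() to branch and expect() to demand required tokens.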

spack.patch module

class spack.patch.FilePatch(pkg, relative_path, level, working_dir, ordering_key=None)[source]

Bases: Patch

Describes a patch that is retrieved from a file in the repository.

Parameters
  • pkg (str) – the class object for the package that owns the patch

  • relative_path (str) – path to patch, relative to the repository directory for a package.

  • level (int) – level to pass to patch command

  • working_dir (str) – path within the source directory where patch should be applied

property sha256
to_dict()[source]

Partial dictionary – subclasses should add to this.

exception spack.patch.NoSuchPatchError(message, long_message=None)[source]

Bases: SpackError

Raised when a patch file doesn’t exist.

class spack.patch.Patch(pkg, path_or_url, level, working_dir)[source]

Bases: object

Base class for patches.

Parameters

pkg (str) – the package that owns the patch

The owning package is not necessarily the package to apply the patch to – in the case where a dependent package patches its dependency, it is the dependent’s fullname.

apply(stage)[source]

Apply a patch to source in a stage.

Parameters

stage (spack.stage.Stage) – stage where source code lives

clean()[source]

Clean up the patch stage in case of a UrlPatch

fetch()[source]

Fetch the patch in case of a UrlPatch

property stage
to_dict()[source]

Partial dictionary – subclasses should add to this.

class spack.patch.PatchCache(repository, data=None)[source]

Bases: object

Index of patches used in a repository, by sha256 hash.

This allows us to look up patches without loading all packages. It's also needed to properly implement dependency patching, as we need a way to look up patches that come from packages not in the Spec sub-DAG.

The patch index is structured like this in a file (this is YAML, but we write JSON):

patches:
    sha256:
        namespace1.package1:
            <patch json>
        namespace2.package2:
            <patch json>
        ... etc. ...
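A round trip through that layout might look like the following; the sha256 key and the fields inside the per-package entries are illustrative placeholders, not Spack's exact patch schema.

```python
import json

# Hypothetical instance of the index layout shown above. The hash key is
# truncated and the patch fields ("relative_path", "level") are assumptions.
index = {
    "patches": {
        "d2c1e8f...": {
            "builtin.libelf": {"relative_path": "fix-configure.patch", "level": 1}
        }
    }
}
serialized = json.dumps(index)  # written as JSON, though valid YAML too
```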
classmethod from_json(stream, repository)[source]
patch_for_package(sha256, pkg)[source]

Look up a patch in the index and build a patch object for it.

We build patch objects lazily because building them requires that we have information about the package's location in its repo.

to_json(stream)[source]
update(other)[source]

Update this cache with the contents of another.

update_package(pkg_fullname)[source]
exception spack.patch.PatchDirectiveError(message, long_message=None)[source]

Bases: SpackError

Raised when the wrong arguments are supplied to the patch directive.

class spack.patch.UrlPatch(pkg, url, level=1, working_dir='.', ordering_key=None, **kwargs)[source]

Bases: Patch

Describes a patch that is retrieved from a URL.

Parameters
  • pkg (str) – the package that owns the patch

  • url (str) – URL where the patch can be fetched

  • level (int) – level to pass to patch command

  • working_dir (str) – path within the source directory where patch should be applied

clean()[source]

Clean up the patch stage in case of a UrlPatch

fetch()[source]

Retrieve the patch in a temporary stage and compute self.path

Parameters

stage – stage for the package that needs to be patched

property stage
to_dict()[source]

Partial dictionary – subclasses should add to this.

spack.patch.apply_patch(stage, patch_path, level=1, working_dir='.')[source]

Apply the patch at patch_path to code in the stage.

Parameters
  • stage (spack.stage.Stage) – stage with code that will be patched

  • patch_path (str) – filesystem location for the patch to apply

  • level (int or None) – patch level (default 1)

  • working_dir (str) – relative path within the stage to change to (default ‘.’)

spack.patch.from_dict(dictionary, repository=None)[source]

Create a patch from json dictionary.

spack.paths module

Defines paths that are part of Spack’s directory structure.

Do not import other spack modules here. This module is used throughout Spack and should bring in a minimal number of external dependencies.

spack.paths.bin_path = '/home/docs/checkouts/readthedocs.org/user_builds/spack/checkouts/v0.19.1/lib/spack/docs/_spack_root/bin'

bin directory in the spack prefix

spack.paths.default_misc_cache_path = '/home/docs/.spack/cache'

transient caches for Spack data (virtual cache, patch sha256 lookup, etc.)

spack.paths.default_monitor_path = '/home/docs/.spack/reports/monitor'

spack monitor analysis directories

spack.paths.default_test_path = '/home/docs/.spack/test'

installation test (spack test) output

spack.paths.default_user_bootstrap_path = '/home/docs/.spack/bootstrap'

bootstrap store for bootstrapping clingo and other tools

spack.paths.prefix = '/home/docs/checkouts/readthedocs.org/user_builds/spack/checkouts/v0.19.1/lib/spack/docs/_spack_root'

This file lives in $prefix/lib/spack/spack/__file__

spack.paths.reports_path = '/home/docs/.spack/reports'

junit, cdash, etc. reports about builds

spack.paths.sbang_script = '/home/docs/checkouts/readthedocs.org/user_builds/spack/checkouts/v0.19.1/lib/spack/docs/_spack_root/bin/sbang'

The sbang script in the spack installation

spack.paths.spack_root = '/home/docs/checkouts/readthedocs.org/user_builds/spack/checkouts/v0.19.1/lib/spack/docs/_spack_root'

synonym for prefix

spack.paths.spack_script = '/home/docs/checkouts/readthedocs.org/user_builds/spack/checkouts/v0.19.1/lib/spack/docs/_spack_root/bin/spack'

The spack script itself

spack.paths.system_config_path = '/etc/spack'

System configuration location

spack.paths.user_config_path = '/home/docs/.spack'

User configuration location

spack.paths.user_repos_cache_path = '/home/docs/.spack/git_repos'

git repositories fetched to compare commits to versions

spack.projections module

spack.projections.get_projection(projections, spec)[source]

Get the projection for a spec from a projections dict.
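The lookup semantics can be sketched as follows; this is an assumption about the behavior (real projections match specs against constraint strings, not just names), with lookup_projection as a hypothetical helper.

```python
# Hedged sketch: prefer the most specific matching key, fall back to the
# catch-all "all" entry, as in view/projection configuration files.
def lookup_projection(projections, spec_name):
    return projections.get(spec_name, projections.get("all"))

projections = {"zlib": "{name}-{version}", "all": "{name}/{version}"}
zlib_proj = lookup_projection(projections, "zlib")
other_proj = lookup_projection(projections, "mpich")
```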

spack.provider_index module

Classes and functions to manage providers of virtual dependencies

class spack.provider_index.ProviderIndex(repository, specs=None, restrict=False)[source]

Bases: _IndexBase

copy()[source]

Return a deep copy of this index.

static from_json(stream, repository)[source]

Construct a provider index from its JSON representation.

Parameters

stream – stream where to read from the JSON data

merge(other)[source]

Merge another provider index into this one.

Parameters

other (ProviderIndex) – provider index to be merged

remove_provider(pkg_name)[source]

Remove a provider from the ProviderIndex.

to_json(stream=None)[source]

Dump a JSON representation of this object.

Parameters

stream – stream where to dump

update(spec)[source]

Update the provider index with additional virtual specs.

Parameters

spec – spec potentially providing additional virtual specs

exception spack.provider_index.ProviderIndexError(message, long_message=None)[source]

Bases: SpackError

Raised when there is a problem with a ProviderIndex.

spack.relocate module

exception spack.relocate.BinaryStringReplacementError(file_path, old_len, new_len)[source]

Bases: SpackError

exception spack.relocate.BinaryTextReplaceError(msg)[source]

Bases: SpackError

exception spack.relocate.CannotGrowString(old, new)[source]

Bases: BinaryTextReplaceError

exception spack.relocate.CannotShrinkCString(old, new, full_old_string)[source]

Bases: BinaryTextReplaceError

exception spack.relocate.InstallRootStringError(file_path, root_path)[source]

Bases: SpackError

spack.relocate.apply_binary_replacements(f, prefix_to_prefix, suffix_safety_size=7)[source]

Given a file opened in rb+ mode, apply the string replacements as specified by an ordered dictionary of prefix-to-prefix mappings. This method takes special care of null-terminated C-strings. C-string constants are problematic because compilers and linkers optimize read-only strings for space by aliasing those that share a common suffix (only a suffix, since all of them are null-terminated). See https://github.com/spack/spack/pull/31739 and https://github.com/spack/spack/pull/32253 for details.

Our logic matches the original prefix with a suffix_safety_size + 1 lookahead for null bytes. If no null terminator is found, we simply pad with leading /, assuming that it's a long C-string; the full C-string after replacement then has a large suffix in common with its original value. If there is a null terminator, we can do the same as long as the replacement has a sufficiently long common suffix with the original prefix. As a last resort, when the replacement does not have a long enough common suffix, we can try to shorten the string, but this only works if the new length is sufficiently short (typically the case when going from large padding to a normal path). If the replacement string is longer, or all of the above fails, we error out.

Parameters
  • f – file opened in rb+ mode

  • prefix_to_prefix (OrderedDict) – OrderedDictionary where the keys are bytes representing the old prefixes and the values are the new

  • suffix_safety_size (int) – in case of null terminated strings, what size of the suffix should remain to avoid aliasing issues?
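The "pad with leading /" trick can be illustrated in isolation; the sketch below is not Spack's implementation (it ignores the suffix-safety lookahead entirely), it only shows why padding keeps a null-terminated string's byte length, and therefore all following offsets, unchanged.

```python
# Illustrative sketch of in-place prefix replacement: when the new prefix is
# shorter than the old one, left-pad it with '/' so the replaced string keeps
# the same byte length inside the binary.
def replace_prefix_same_length(data: bytes, old: bytes, new: bytes) -> bytes:
    assert len(new) <= len(old), "cannot grow a C-string in place"
    padded = b"/" * (len(old) - len(new)) + new
    return data.replace(old, padded)

blob = b"\x00/old/prefix/lib/libfoo.so\x00"
out = replace_prefix_same_length(blob, b"/old/prefix", b"/new")
```

Extra leading slashes are harmless in a path, which is what makes this safe for long C-strings.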

spack.relocate.file_is_relocatable(filename, paths_to_relocate=None)[source]

Returns True if the filename passed as argument is relocatable.

Parameters

filename – absolute path of the file to be analyzed

Returns

True or false

Raises

ValueError – if the filename does not exist or the path is not absolute

spack.relocate.fixup_macos_rpath(root, filename)[source]

Apply rpath fixups to the given file.

Parameters
  • root – absolute path to the parent directory

  • filename – relative path to the library or binary

Returns

True if fixups were applied, else False

spack.relocate.fixup_macos_rpaths(spec)[source]

Remove duplicate and nonexistent rpaths.

Some autotools packages write their own -rpath entries in addition to those implicitly added by the Spack compiler wrappers. On Linux these duplicate rpaths are eliminated, but on macOS they result in multiple entries which makes it harder to adjust with install_name_tool -delete_rpath.

spack.relocate.is_binary(filename)[source]

Returns True if a file is binary, False otherwise.

Parameters

filename – file to be tested

Returns

True or False

spack.relocate.is_relocatable(spec)[source]

Returns True if an installed spec is relocatable.

Parameters

spec (spack.spec.Spec) – spec to be analyzed

Returns

True if the binaries of an installed spec are relocatable and False otherwise.

Raises

ValueError – if the spec is not installed

spack.relocate.macho_find_paths(orig_rpaths, deps, idpath, old_layout_root, prefix_to_prefix)[source]

Parameters
  • orig_rpaths – original rpaths from the Mach-O binaries

  • deps – dependency libraries of the Mach-O binaries

  • idpath – id path of the Mach-O libraries

  • old_layout_root – old install directory layout root

  • prefix_to_prefix – dictionary mapping prefixes in the old directory layout to directories in the new directory layout

Returns

paths_to_paths dictionary mapping all of the old paths to new paths

spack.relocate.macho_make_paths_normal(orig_path_name, rpaths, deps, idpath)[source]

Return a dictionary mapping the relativized rpaths to the original rpaths. This dictionary is used to replace paths in Mach-O binaries. @loader_path is replaced with the dirname of the original path name in rpaths and deps; idpath is replaced with the original path name.

spack.relocate.macho_make_paths_relative(path_name, old_layout_root, rpaths, deps, idpath)[source]

Return a dictionary mapping the original rpaths to the relativized rpaths. This dictionary is used to replace paths in Mach-O binaries. old_layout_root is replaced with a path relative to the dirname of path_name in rpaths and deps; idpath is replaced with @rpath/libname.

spack.relocate.macholib_get_paths(cur_path)[source]

Get rpaths, dependent libraries, and library id of mach-o objects.

spack.relocate.make_elf_binaries_relative(new_binaries, orig_binaries, orig_layout_root)[source]

Replace the original RPATHs in the new binaries making them relative to the original layout root.

Parameters
  • new_binaries (list) – new binaries whose RPATHs is to be made relative

  • orig_binaries (list) – original binaries

  • orig_layout_root (str) – path to be used as a base for making RPATHs relative

spack.relocate.make_link_relative(new_links, orig_links)[source]

Compute the relative target from the original link and make the new link relative.

Parameters
  • new_links (list) – new links to be made relative

  • orig_links (list) – original links

spack.relocate.make_macho_binaries_relative(cur_path_names, orig_path_names, old_layout_root)[source]

Replace old RPATHs with paths relative to old_dir in binary files

spack.relocate.modify_macho_object(cur_path, rpaths, deps, idpath, paths_to_paths)[source]

Make Mach-O buildcaches on macOS by replacing old paths with new paths using install_name_tool.

Parameters
  • cur_path – Mach-O binary to be modified

  • rpaths – original rpaths

  • deps – original dependency paths

  • idpath – original id path, if a Mach-O library

  • paths_to_paths – dictionary mapping paths in the old install layout to the new install layout

spack.relocate.modify_object_macholib(cur_path, paths_to_paths)[source]

Used when installing Mach-O buildcaches on Linux, by rewriting Mach-O loader commands for the dependency library paths of Mach-O binaries and the id path of Mach-O libraries. Rewriting of rpaths is handled by replace_prefix_bin.

Parameters
  • cur_path – Mach-O binary to be modified

  • paths_to_paths – dictionary mapping paths in the old install layout to the new install layout

spack.relocate.needs_binary_relocation(m_type, m_subtype)[source]

Returns True if the file with MIME type/subtype passed as arguments needs binary relocation, False otherwise.

Parameters
  • m_type (str) – MIME type of the file

  • m_subtype (str) – MIME subtype of the file

spack.relocate.needs_text_relocation(m_type, m_subtype)[source]

Returns True if the file with MIME type/subtype passed as arguments needs text relocation, False otherwise.

Parameters
  • m_type (str) – MIME type of the file

  • m_subtype (str) – MIME subtype of the file

spack.relocate.new_relocate_elf_binaries(binaries, prefix_to_prefix)[source]

Take a list of binaries, and an ordered dictionary of prefix to prefix mapping, and update the rpaths accordingly.

spack.relocate.raise_if_not_relocatable(binaries, allow_root)[source]

Raise an error if any binary in the list is not relocatable.

Parameters
  • binaries (list) – list of binaries to check

  • allow_root (bool) – whether root dir is allowed or not in a binary

Raises

InstallRootStringError – if the file is not relocatable

spack.relocate.relocate_elf_binaries(binaries, orig_root, new_root, new_prefixes, rel, orig_prefix, new_prefix)[source]

Relocate the binaries passed as arguments by changing their RPATHs.

Use patchelf to get the original RPATHs and then replace them with rpaths in the new directory layout.

New RPATHs are determined from a dictionary mapping the prefixes in the old directory layout to the prefixes in the new directory layout if the rpath was in the old layout root, i.e. system paths are not replaced.

Parameters
  • binaries (list) – list of binaries that might need relocation, located in the new prefix

  • orig_root (str) – original root to be substituted

  • new_root (str) – new root to be used, only relevant for relative RPATHs

  • new_prefixes (dict) – dictionary that maps the original prefixes to where they should be relocated

  • rel (bool) – True if the RPATHs are relative, False if they are absolute

  • orig_prefix (str) – prefix where the executable was originally located

  • new_prefix (str) – prefix where we want to relocate the executable

Relocate links to a new install prefix.

spack.relocate.relocate_macho_binaries(path_names, old_layout_root, new_layout_root, prefix_to_prefix, rel, old_prefix, new_prefix)[source]

Use the macholib Python package to get the rpaths, dependent libraries, and library identity of libraries from the Mach-O object. Modify them with the replacement paths queried from the dictionary mapping old layout prefixes to hashes and the dictionary mapping hashes to the new layout prefixes.

spack.relocate.unsafe_relocate_text(files, prefixes, concurrency=32)[source]

Relocate text files from the original installation prefix to the new prefix.

Relocation also affects the path in Spack's sbang script.

Note: unsafe when files contains duplicates, such as repeated paths, symlinks, hardlinks.

Parameters
  • files (list) – Text files to be relocated

  • prefixes (OrderedDict) – String prefixes which need to be changed

  • concurrency (int) – Preferred degree of parallelism

spack.relocate.unsafe_relocate_text_bin(binaries, prefixes, concurrency=32)[source]

Replace null terminated path strings hard coded into binaries.

The new install prefix must be shorter than the original one.

Note: unsafe when files contains duplicates, such as repeated paths, symlinks, hardlinks.

Parameters
  • binaries (list) – binaries to be relocated

  • prefixes (OrderedDict) – String prefixes which need to be changed.

  • concurrency (int) – Desired degree of parallelism.

Raises

BinaryTextReplaceError – when the new path is longer than the old path

spack.relocate.utf8_path_to_binary_regex(prefix)[source]

Create a (binary) regex that matches the input path in utf8

spack.relocate.utf8_paths_to_single_binary_regex(prefixes)[source]

Create a (binary) regex that matches any input path in utf8
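The idea behind these helpers can be sketched with the standard re module; the function name and exact pattern shape below are assumptions, not Spack's implementation: escape each prefix, encode it as UTF-8, and join the alternatives into one binary pattern.

```python
import re

# Sketch: one compiled bytes-regex that matches any of the given path
# prefixes inside arbitrary binary data (hypothetical helper).
def paths_to_binary_regex(prefixes):
    alternatives = b"|".join(re.escape(p.encode("utf-8")) for p in prefixes)
    return re.compile(b"(" + alternatives + b")")

rx = paths_to_binary_regex(["/opt/spack/foo", "/opt/spack/bar"])
match = rx.search(b"RPATH=/opt/spack/bar/lib\x00")
```

A single combined pattern lets one pass over a binary find occurrences of any old prefix.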

spack.repo module

exception spack.repo.BadRepoError(message, long_message=None)[source]

Bases: RepoError

Raised when repo layout is invalid.

exception spack.repo.FailedConstructorError(name, exc_type, exc_obj, exc_tb)[source]

Bases: RepoError

Raised when a package’s class constructor fails.

class spack.repo.FastPackageChecker(packages_path)[source]

Bases: Mapping

Cache that maps package names to the stats obtained on the ‘package.py’ files associated with them.

For each repository a cache is maintained at class level, and shared among all instances referring to it. Update of the global cache is done lazily during instance initialization.

invalidate()[source]

Regenerate cache for this checker.

last_mtime()[source]
modified_since(since)[source]
class spack.repo.GitExe[source]

Bases: object

exception spack.repo.IndexError(message, long_message=None)[source]

Bases: RepoError

Raised when there’s an error with an index.

class spack.repo.Indexer(repository)[source]

Bases: object

Adaptor for indexes that need to be generated when repos are updated.

create()[source]
needs_update(pkg)[source]

Whether an update is needed when the package file hasn’t changed.

Returns

True if this package needs its index updated, False otherwise.

Return type

(bool)

We already automatically update indexes when package files change, but other files (like patches) may change underneath the package file. This method can be used to check additional package-specific files whenever they’re loaded, to tell the RepoIndex to update the index just for that package.

abstract read(stream)[source]

Read this index from a provided file object.

abstract update(pkg_fullname)[source]

Update the index in memory with information about a package.

abstract write(stream)[source]

Write the index to a file object.

exception spack.repo.InvalidNamespaceError(message, long_message=None)[source]

Bases: RepoError

Raised when an invalid namespace is encountered.

class spack.repo.MockRepositoryBuilder(root_directory, namespace=None)[source]

Bases: object

Build a mock repository in a directory

add_package(name, dependencies=None)[source]

Create a mock package in the repository, using a Jinja2 template.

Parameters
  • name (str) – name of the new package

  • dependencies (list) – list of (“dep_spec”, “dep_type”, “condition”) tuples. Both “dep_type” and “condition” may be None, in which case spack.dependency.default_deptype and spack.spec.Spec() are used.

recipe_filename(name)[source]
remove(name)[source]
spack.repo.NOT_PROVIDED = <object object>

Guaranteed unused default value for some functions.

exception spack.repo.NoRepoConfiguredError(message, long_message=None)[source]

Bases: RepoError

Raised when there are no repositories configured.

class spack.repo.PatchIndexer(repository)[source]

Bases: Indexer

Lifecycle methods for patch cache.

needs_update()[source]

Whether an update is needed when the package file hasn’t changed.

Returns

True if this package needs its index updated, False otherwise.

Return type

(bool)

We already automatically update indexes when package files change, but other files (like patches) may change underneath the package file. This method can be used to check additional package-specific files whenever they’re loaded, to tell the RepoIndex to update the index just for that package.

read(stream)[source]

Read this index from a provided file object.

update(pkg_fullname)[source]

Update the index in memory with information about a package.

write(stream)[source]

Write the index to a file object.

class spack.repo.ProviderIndexer(repository)[source]

Bases: Indexer

Lifecycle methods for virtual package providers.

read(stream)[source]

Read this index from a provided file object.

update(pkg_fullname)[source]

Update the index in memory with information about a package.

write(stream)[source]

Write the index to a file object.

spack.repo.ROOT_PYTHON_NAMESPACE = 'spack.pkg'

Package modules are imported as spack.pkg.<repo-namespace>.<pkg-name>

class spack.repo.Repo(root, cache=None)[source]

Bases: object

Class representing a package repository in the filesystem.

Each package repository must have a top-level configuration file called repo.yaml.

Currently, repo.yaml must define:

namespace:

A Python namespace where the repository’s packages should live.

all_package_classes()[source]

Iterator over all package classes in the repository.

Use this with care, because loading packages is slow.

all_package_names(include_virtuals=False)[source]

Returns a sorted list of all package names in the Repo.

dirname_for_package_name(pkg_name)[source]

Get the directory name for a particular package. This is the directory that contains its package.py file.

dump_provenance(spec, path)[source]

Dump provenance information for a spec to a particular path.

This dumps the package file and any associated patch files. Raises UnknownPackageError if not found.

exists(pkg_name)[source]

Whether a package with the supplied name exists.

extensions_for(extendee_spec)[source]
filename_for_package_name(pkg_name)[source]

Get the filename for the module we should load for a particular package. Packages for a Repo live in $root/<package_name>/package.py

This will return a proper package.py path even if the package doesn’t exist yet, so callers will need to ensure the package exists before importing.

get(spec)[source]

Returns the package associated with the supplied spec.

get_pkg_class(pkg_name)[source]

Get the class for the package out of its module.

First loads (or fetches from cache) a module for the package. Then extracts the package class from the module according to Spack’s naming convention.

property index

Construct the index for this repo lazily.

is_prefix(fullname)[source]

True if fullname is a prefix of this Repo’s namespace.

is_virtual(pkg_name)[source]

Return True if the package with this name is virtual, False otherwise.

This function uses the provider index. If calling from a code block that is used to construct the provider index, use the is_virtual_safe function instead.

Parameters

pkg_name (str) – name of the package we want to check

is_virtual_safe(pkg_name)[source]

Return True if the package with this name is virtual, False otherwise.

This function doesn’t use the provider index.

Parameters

pkg_name (str) – name of the package we want to check

last_mtime()[source]

Time a package file in this repo was last updated.

packages_with_tags(*tags)[source]
property patch_index

Index of patches and packages they’re defined on.

property provider_index

A provider index with names specific to this repo.

providers_for(vpkg_spec)[source]
purge()[source]

Clear entire package instance cache.

real_name(import_name)[source]

Allow users to import Spack packages using Python identifiers.

A Python identifier might map to many different Spack package names due to hyphen/underscore ambiguity.

Easy example:

num3proxy -> 3proxy

Ambiguous:

foo_bar -> foo_bar, foo-bar

More ambiguous:

foo_bar_baz -> foo_bar_baz, foo-bar-baz, foo_bar-baz, foo-bar_baz
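The ambiguity above can be enumerated mechanically; candidate_package_names is a hypothetical helper for illustration, not Spack's real_name, which additionally checks which candidates actually exist in the repo.

```python
from itertools import product

# Every '_' in a Python identifier may stand for either '_' or '-' in the
# actual Spack package name, so n underscores yield 2**n candidates.
def candidate_package_names(import_name):
    parts = import_name.split("_")
    return sorted(
        "".join(p + s for p, s in zip(parts, seps)) + parts[-1]
        for seps in product("_-", repeat=len(parts) - 1)
    )

names = candidate_package_names("foo_bar_baz")
```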

property tag_index

Index of tags and which packages they’re defined on.

exception spack.repo.RepoError(message, long_message=None)[source]

Bases: SpackError

Superclass for repository-related errors.

class spack.repo.RepoIndex(package_checker, namespace, cache)[source]

Bases: object

Container class that manages a set of Indexers for a Repo.

This class is responsible for checking packages in a repository for updates (using FastPackageChecker) and for regenerating indexes when they’re needed.

Indexers should be added to the RepoIndex using add_index(name, indexer), and they should support the interface defined by Indexer, so that the RepoIndex can read, generate, and update stored indices.

Generated indexes are accessed by name via __getitem__().

add_indexer(name, indexer)[source]

Add an indexer to the repo index.

Parameters
  • name (str) – name of this indexer

  • indexer (object) – an object that supports create(), read(), write(), and get_index() operations

class spack.repo.RepoLoader(fullname, repo, package_name)[source]

Bases: _PrependFileLoader

Loads the Python module associated with a package in a specific repository.

class spack.repo.RepoPath(*repos, **kwargs)[source]

Bases: object

A RepoPath is a list of repos that function as one.

It functions exactly like a Repo, but it operates on the combined results of the Repos in its list instead of on a single package repository.

Parameters

repos (list) – list of Repo objects or paths to put in this RepoPath

all_package_classes()[source]
all_package_names(include_virtuals=False)[source]
dirname_for_package_name(pkg_name)[source]
dump_provenance(spec, path)[source]

Dump provenance information for a spec to a particular path.

This dumps the package file and any associated patch files. Raises UnknownPackageError if not found.

exists(pkg_name)[source]

Whether a package with the given name exists in the path’s repos.

Note that virtual packages do not “exist”.

extensions_for(extendee_spec)[source]
filename_for_package_name(pkg_name)[source]
first_repo()[source]

Get the first repo in precedence order.

get(spec)[source]

Returns the package associated with the supplied spec.

get_pkg_class(pkg_name)[source]

Find a class for the spec’s package and return the class object.

get_repo(namespace, default=<object object>)[source]

Get a repository by namespace.

Parameters

namespace – Look up this namespace in the RepoPath, and return it if found.

Optional Arguments:

default:

If default is provided, return it when the namespace isn’t found. If not, raise an UnknownNamespaceError.
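
The default=<object object> in the signature is the sentinel-default pattern: it lets callers pass default=None and still have that honored as a real default. A minimal sketch of the pattern (get_repo here is a hypothetical dict-based stand-in, not the real method, and it raises KeyError where Spack raises UnknownNamespaceError):

```python
_MISSING = object()  # sentinel: distinguishes "no default given" from default=None

def get_repo(repos, namespace, default=_MISSING):
    """Look up `namespace` in `repos`; return `default` if one was given,
    otherwise raise."""
    if namespace in repos:
        return repos[namespace]
    if default is _MISSING:
        raise KeyError("unknown namespace: %s" % namespace)
    return default

repos = {"builtin": "builtin-repo"}
print(get_repo(repos, "builtin"))            # builtin-repo
print(get_repo(repos, "missing", default=None))  # None
```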

is_virtual(pkg_name)[source]

Return True if the package with this name is virtual, False otherwise.

This function uses the provider index. If calling from a code block that is used to construct the provider index, use the is_virtual_safe function instead.

Parameters

pkg_name (str) – name of the package we want to check

is_virtual_safe(pkg_name)[source]

Return True if the package with this name is virtual, False otherwise.

This function doesn’t use the provider index.

Parameters

pkg_name (str) – name of the package we want to check

last_mtime()[source]

Time a package file in this repo was last updated.

packages_with_tags(*tags)[source]
property patch_index

Merged PatchIndex from all Repos in the RepoPath.

property provider_index

Merged ProviderIndex from all Repos in the RepoPath.

providers_for(vpkg_spec)[source]
put_first(repo)[source]

Add repo first in the search path.

put_last(repo)[source]

Add repo last in the search path.

remove(repo)[source]

Remove a repo from the search path.

repo_for_pkg(spec)[source]

Given a spec, get the repository for its package.

property tag_index

Merged TagIndex from all Repos in the RepoPath.

class spack.repo.ReposFinder[source]

Bases: object

MetaPathFinder class that loads a Python module corresponding to a Spack package

Return a loader based on the inspection of the current global repository list.

compute_loader(fullname)[source]
find_module(fullname, python_path=None)[source]
find_spec(fullname, python_path, target=None)[source]
class spack.repo.SpackNamespace(namespace)[source]

Bases: module

Allow lazy loading of modules.

class spack.repo.SpackNamespaceLoader[source]

Bases: object

create_module(spec)[source]
exec_module(module)[source]
load_module(fullname)[source]
class spack.repo.TagIndexer(repository)[source]

Bases: Indexer

Lifecycle methods for a TagIndex on a Repo.

read(stream)[source]

Read this index from a provided file object.

update(pkg_fullname)[source]

Update the index in memory with information about a package.

write(stream)[source]

Write the index to a file object.

exception spack.repo.UnknownEntityError(message, long_message=None)[source]

Bases: RepoError

Raised when we encounter a package spack doesn’t have.

exception spack.repo.UnknownNamespaceError(namespace, name=None)[source]

Bases: UnknownEntityError

Raised when we encounter an unknown namespace

exception spack.repo.UnknownPackageError(name, repo=None)[source]

Bases: UnknownEntityError

Raised when we encounter a package spack doesn’t have.

spack.repo.add_package_to_git_stage(packages)[source]

Add a package to the git stage with git add.

spack.repo.all_package_names(include_virtuals=False)[source]

Convenience wrapper around spack.repo.path.all_package_names().

spack.repo.autospec(function)[source]

Decorator that automatically converts the first argument of a function to a Spec.

spack.repo.create(configuration)[source]

Create a RepoPath from a configuration object.

Parameters

configuration (spack.config.Configuration) – configuration object

spack.repo.create_or_construct(path, namespace=None)[source]

Create a repository, or just return a Repo if it already exists.

spack.repo.create_repo(root, namespace=None)[source]

Create a new repository in root with the specified namespace.

If the namespace is not provided, use the basename of root. Return the canonicalized path and namespace of the created repository.

spack.repo.diff_packages(rev1, rev2)[source]

Compute packages lists for the two revisions and return a tuple containing all the packages in rev1 but not in rev2 and all the packages in rev2 but not in rev1.

spack.repo.get_all_package_diffs(type, rev1='HEAD^1', rev2='HEAD')[source]
Show packages changed, added, or removed (or any combination of those) since a commit.

Parameters
  • type (str) – String containing one or more of ‘A’, ‘B’, ‘C’

  • rev1 (str) – Revision to compare against, default is ‘HEAD^’

  • rev2 (str) – Revision to compare to rev1, default is ‘HEAD’

Returns

A set containing the names of affected packages.
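
The rev1/rev2 arithmetic behind diff_packages can be sketched with plain sets of package names (this diff_packages is a stand-in that takes precomputed name sets rather than git revisions):

```python
def diff_packages(pkgs_rev1, pkgs_rev2):
    """Return (only_in_rev1, only_in_rev2) for two package-name sets.

    Mirrors the tuple described above: all packages in rev1 but not in
    rev2, and all packages in rev2 but not in rev1.
    """
    s1, s2 = set(pkgs_rev1), set(pkgs_rev2)
    return s1 - s2, s2 - s1

removed, added = diff_packages({"hdf5", "zlib", "mpich"},
                               {"hdf5", "zlib", "openmpi"})
print(sorted(removed), sorted(added))  # ['mpich'] ['openmpi']
```

When rev1 is the older revision, the first set holds removed packages and the second the added ones.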

spack.repo.get_git()[source]

Get a git executable that runs within the packages path.

spack.repo.is_package_file(filename)[source]

Determine whether we are in a package file from a repo.

spack.repo.list_packages(rev)[source]

List all packages associated with the given revision

spack.repo.namespace_from_fullname(fullname)[source]

Return the repository namespace only for the full module name.

For instance:

namespace_from_fullname(‘spack.pkg.builtin.hdf5’) == ‘builtin’

Parameters

fullname (str) – full name for the Python module

spack.repo.packages_path()[source]

Get the test repo if it is active, otherwise the builtin repo.

spack.repo.path = <spack.repo.RepoPath object>

Singleton repo path instance

spack.repo.python_package_for_repo(namespace)[source]

Returns the full namespace of a repository, given its relative one

For instance:

python_package_for_repo(‘builtin’) == ‘spack.pkg.builtin’

Parameters

namespace (str) – repo namespace

spack.repo.use_repositories(*paths_and_repos, **kwargs)[source]

Use the repositories passed as arguments within the context manager.

Parameters
  • *paths_and_repos – paths to the repositories to be used, or already constructed Repo objects

  • override (bool) – if True use only the repositories passed as input, if False add them to the top of the list of current repositories.

Returns

Corresponding RepoPath object

spack.report module

Tools to produce reports of spec installations

class spack.report.collect_info(cls, function, format_name, args)[source]

Bases: object

Collects information to build a report while installing and dumps it on exit.

If the format name is not None, this context manager decorates PackageInstaller._install_task when entering the context for a PackageBase.do_install operation and unrolls the change when exiting.

Within the context, only the specs that are passed to it on initialization will be recorded for the report. Data from other specs will be discarded.

Examples

# The file 'junit.xml' is written when exiting
# the context
s = [Spec('hdf5').concretized()]
with collect_info(PackageBase, do_install, s, 'junit', 'a.xml'):
    # A report will be generated for these specs...
    for spec in s:
        getattr(cls, function)(spec)
    # ...but not for this one
    Spec('zlib').concretized().do_install()
Parameters
  • cls – class on which to wrap a function

  • function – function to wrap

  • format_name (str or None) – one of the supported formats

  • args (dict) – args passed to function

Raises

ValueError – when format_name is not in valid_formats

concretization_report(msg)[source]
spack.report.valid_formats = [None, 'junit', 'cdash']

Allowed report formats

spack.reporter module

class spack.reporter.Reporter(args)[source]

Bases: object

Base class for report writers.

build_report(filename, report_data)[source]
concretization_report(filename, msg)[source]
test_report(filename, report_data)[source]

spack.resource module

Describes an optional resource needed for a build.

Typically a bunch of sources that can be built in-tree within another package to enable optional features.

class spack.resource.Resource(name, fetcher, destination, placement)[source]

Bases: object

Represents an optional resource to be fetched by a package.

Aggregates a name, a fetcher, a destination and a placement.

spack.rewiring module

exception spack.rewiring.PackageNotInstalledError(spliced_spec, build_spec, dep)[source]

Bases: RewireError

Raised when the build_spec for a splice was not installed.

exception spack.rewiring.RewireError(message, long_msg=None)[source]

Bases: SpackError

Raised when something goes wrong with rewiring.

spack.rewiring.rewire(spliced_spec)[source]

Given a spliced spec, this function conducts all the rewiring on all nodes in the DAG of that spec.

spack.rewiring.rewire_node(spec, explicit)[source]

This function rewires a single node, worrying only about references to its subgraph. Binaries, text, and links are all changed in accordance with the splice. The resulting package is then ‘installed.’

spack.s3_handler module

class spack.s3_handler.UrllibS3Handler(debuglevel=0, context=None, check_hostname=None)[source]

Bases: HTTPSHandler

s3_open(req)[source]
class spack.s3_handler.WrapStream(raw)[source]

Bases: BufferedReader

detach()[source]

Disconnect this buffer from its underlying raw stream and return it.

After the raw stream has been detached, the buffer is in an unusable state.

read(*args, **kwargs)[source]

Read and return up to n bytes.

If the argument is omitted, None, or negative, reads and returns all data until EOF.

If the argument is positive, and the underlying raw stream is not ‘interactive’, multiple raw reads may be issued to satisfy the byte count (unless EOF is reached first). But for interactive raw streams (as well as sockets and pipes), at most one raw read will be issued, and a short result does not imply that EOF is imminent.

Returns an empty bytes object on EOF.

Returns None if the underlying raw stream was open in non-blocking mode and no data is available at the moment.

spack.spec module

Spack allows very fine-grained control over how packages are installed and over how they are built and configured. To make this easy, it has its own syntax for declaring a dependence. We call a descriptor of a particular package configuration a “spec”.

The syntax looks like this:

$ spack install mpileaks ^openmpi @1.2:1.4 +debug %intel @12.1 target=zen
                0        1        2        3      4      5     6

The first part of this is the command, ‘spack install’. The rest of the line is a spec for a particular installation of the mpileaks package.

  1. The package to install

  2. A dependency of the package, prefixed by ^

  3. A version descriptor for the package. This can either be a specific version, like “1.2”, or it can be a range of versions, e.g. “1.2:1.4”. If multiple specific versions or multiple ranges are acceptable, they can be separated by commas, e.g. if a package will only build with versions 1.0, 1.2-1.4, and 1.6-1.8 of mvapich, you could say:

    depends_on(“mvapich@1.0,1.2:1.4,1.6:1.8”)

  4. A compile-time variant of the package. If you need openmpi to be built in debug mode for your package to work, you can require it by adding +debug to the openmpi spec when you depend on it. If you do NOT want the debug option to be enabled, then replace this with -debug. If you would like the variant to be propagated through all your package’s dependencies, use “++” for enabling and “--” or “~~” for disabling.

  5. The name of the compiler to build with.

  6. The versions of the compiler to build with. Note that the identifier for a compiler version is the same ‘@’ that is used for a package version. A version list denoted by ‘@’ is associated with the compiler only if it comes immediately after the compiler name. Otherwise it will be associated with the current package spec.

  7. The architecture to build with. This is needed on machines where cross-compilation is required

Here is the EBNF grammar for a spec:

spec-list    = { spec [ dep-list ] }
dep_list     = { ^ spec }
spec         = id [ options ]
options      = { @version-list | ++variant | +variant |
                 --variant | -variant | ~~variant | ~variant |
                 variant=value | variant==value | %compiler |
                 arch=architecture | [ flag ]==value | [ flag ]=value}
flag         = { cflags | cxxflags | fcflags | fflags | cppflags |
                 ldflags | ldlibs }
variant      = id
architecture = id
compiler     = id [ version-list ]
version-list = version [ { , version } ]
version      = id | id: | :id | id:id
id           = [A-Za-z0-9_][A-Za-z0-9_.-]*

Identifiers using the <name>=<value> command, such as architectures and compiler flags, require a space before the name.

There is one context-sensitive part: ids in versions may contain ‘.’, while other ids may not.

There is one ambiguity: since ‘-‘ is allowed in an id, you need to put whitespace before -variant for it to be tokenized properly. You can either use whitespace, or you can just use ~variant since it means the same thing. Spack uses ~variant in directory names and in the canonical form of specs to avoid ambiguity. Both are provided because ~ can cause shell expansion when it is the first character in an id typed on the command line.
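
A rough sketch of how a version-list such as “1.0,1.2:1.4,1.6:” from the grammar above could be parsed into ranges (parse_version_list is illustrative only; Spack’s real parser is considerably more involved):

```python
def parse_version_list(text):
    """Parse a version-list like '1.0,1.2:1.4' into (low, high) pairs.

    Follows the grammar above: 'id' is a single version, 'id:id' a
    closed range, and 'id:' / ':id' open-ended ranges (None = open).
    """
    ranges = []
    for item in text.split(","):
        if ":" in item:
            low, _, high = item.partition(":")
            ranges.append((low or None, high or None))
        else:
            ranges.append((item, item))  # a single specific version
    return ranges

print(parse_version_list("1.0,1.2:1.4,1.6:"))
# [('1.0', '1.0'), ('1.2', '1.4'), ('1.6', None)]
```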

exception spack.spec.AmbiguousHashError(msg, *specs)[source]

Bases: SpecError

exception spack.spec.ArchitecturePropagationError(message, long_message=None)[source]

Bases: SpecError

Raised when the double equal symbols are used to assign the spec’s architecture.

class spack.spec.CompilerSpec(*args)[source]

Bases: object

The CompilerSpec field represents the compiler or range of compiler versions that a package should be built with. CompilerSpecs have a name and a version list.

property concrete

A CompilerSpec is concrete if its versions are concrete and there is an available compiler with the right version.

constrain(other)[source]

Intersect self’s versions with other.

Return whether the CompilerSpec changed.

copy()[source]
static from_dict(d)[source]
name
satisfies(other, strict=False)[source]
to_dict()[source]
property version
versions
exception spack.spec.DuplicateArchitectureError(message, long_message=None)[source]

Bases: SpecError

Raised when the same architecture occurs in a spec twice.

exception spack.spec.DuplicateCompilerSpecError(message, long_message=None)[source]

Bases: SpecError

Raised when the same compiler occurs in a spec twice.

exception spack.spec.DuplicateDependencyError(message, long_message=None)[source]

Bases: SpecError

Raised when the same dependency occurs in a spec twice.

exception spack.spec.InconsistentSpecError(message, long_message=None)[source]

Bases: SpecError

Raised when two nodes in the same spec DAG have inconsistent constraints.

exception spack.spec.InvalidDependencyError(pkg, deps)[source]

Bases: SpecError

Raised when a dependency in a spec is not actually a dependency of the package.

exception spack.spec.InvalidHashError(spec, hash)[source]

Bases: SpecError

exception spack.spec.MultipleProviderError(vpkg, providers)[source]

Bases: SpecError

Raised when multiple packages provide a particular virtual dependency.

exception spack.spec.NoProviderError(vpkg)[source]

Bases: SpecError

Raised when there is no package that provides a particular virtual dependency.

exception spack.spec.NoSuchHashError(hash)[source]

Bases: SpecError

exception spack.spec.RedundantSpecError(spec, addition)[source]

Bases: SpecError

class spack.spec.Spec(spec_like=None, normal=False, concrete=False, external_path=None, external_modules=None)[source]

Bases: object

add_dependency_edge(dependency_spec, deptype)[source]

Add a dependency edge to this spec.

Parameters
  • dependency_spec (Spec) – spec of the dependency

  • deptype (str or tuple) – dependency types

property build_spec
static build_spec_from_node_dict(node, hash_type='hash')[source]
cformat(*args, **kwargs)[source]

Same as format, but color defaults to auto instead of False.

clear_cached_hashes(ignore=())[source]

Clears all cached hashes in a Spec, while preserving other properties.

clear_dependencies()[source]

Trim the dependencies of this spec.

clear_edges()[source]

Trim the dependencies and dependents of this spec.

colorized()[source]
common_dependencies(other)[source]

Return names of dependencies that self and other have in common.

property concrete

A spec is concrete if it describes a single build of a package.

More formally, a spec is concrete if concretize() has been called on it and it has been marked _concrete.

Concrete specs either can be or have been built. All constraints have been resolved, optional dependencies have been added or removed, a compiler has been chosen, and all variants have values.

concretize(tests=False)[source]

Concretize the current spec.

Parameters

tests (bool or list) – if False disregard ‘test’ dependencies, if a list of names activate them for the packages in the list, if True activate ‘test’ dependencies for all packages.

concretized(tests=False)[source]

This is a non-destructive version of concretize().

First clones, then returns a concrete version of this package without modifying this package.

Parameters

tests (bool or list) – if False disregard ‘test’ dependencies, if a list of names activate them for the packages in the list, if True activate ‘test’ dependencies for all packages.

constrain(other, deps=True)[source]

Merge the constraints of other with self.

Returns True if the spec changed as a result, False if not.

constrained(other, deps=True)[source]

Return a constrained copy without modifying this spec.

copy(deps=True, **kwargs)[source]

Make a copy of this spec.

Parameters
  • deps (bool or tuple) – Defaults to True. If boolean, controls whether dependencies are copied (copied if True). If a tuple is provided, only dependencies of types matching those in the tuple are copied.

  • kwargs – additional arguments for internal use (passed to _dup).

Returns

A copy of this spec.

Examples

Deep copy with dependencies:

spec.copy()
spec.copy(deps=True)

Shallow copy (no dependencies):

spec.copy(deps=False)

Only build and run dependencies:

deps=('build', 'run')
property cshort_spec

Returns an auto-colorized version of self.short_spec.

dag_hash(length=None)[source]

This is Spack’s default hash, used to identify installations.

Same as the full hash (includes package hash and build/link/run deps). Tells us when package files and any dependencies have changed.

NOTE: Versions of Spack prior to 0.18 only included link and run deps.

dag_hash_bit_prefix(bits)[source]

Get the first <bits> bits of the DAG hash as an integer type.
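
The bit-prefix idea can be sketched generically: interpret a digest as a big-endian integer and keep only the leading bits. This is an assumption about the general approach, not Spack’s exact code (which works on its base32 DAG hashes):

```python
import hashlib

def bit_prefix(data, bits):
    """Return the first `bits` bits of the sha256 digest of `data`
    as an integer, by right-shifting away the remaining bits."""
    digest = hashlib.sha256(data).digest()
    as_int = int.from_bytes(digest, "big")
    return as_int >> (len(digest) * 8 - bits)

print(bit_prefix(b"hdf5@1.12.0", 16))  # an integer below 2**16
```

A useful property of this encoding: the first 8 bits are a prefix of the first 16, so longer prefixes refine shorter ones.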

static default_arch()[source]

Return an anonymous spec for the default architecture

dependencies(name=None, deptype='all')[source]

Return a list of direct dependencies (nodes in the DAG).

Parameters
  • name (str) – filter dependencies by package name

  • deptype (str or tuple) – allowed dependency types

static dependencies_from_node_dict(node)[source]
dependents(name=None, deptype='all')[source]

Return a list of direct dependents (nodes in the DAG).

Parameters
  • name (str) – filter dependents by package name

  • deptype (str or tuple) – allowed dependency types

detach(deptype='all')[source]

Remove any reference that dependencies have to this node.

Parameters

deptype (str or tuple) – dependency types tracked by the current spec

direct_dep_difference(other)[source]

Returns dependencies in self that are not in other.

edges_from_dependents(name=None, deptype='all')[source]

Return a list of edges connecting this node in the DAG to parents.

Parameters
  • name (str) – filter dependents by package name

  • deptype (str or tuple) – allowed dependency types

edges_to_dependencies(name=None, deptype='all')[source]

Return a list of edges connecting this node in the DAG to children.

Parameters
  • name (str) – filter dependencies by package name

  • deptype (str or tuple) – allowed dependency types

static ensure_external_path_if_external(external_spec)[source]
static ensure_no_deprecated(root)[source]

Raise if a deprecated spec is in the dag.

Parameters

root (Spec) – root spec to be analyzed

Raises

SpecDeprecatedError – if any deprecated spec is found

static ensure_valid_variants(spec)[source]

Ensures that the variants attached to a spec are valid.

Parameters

spec (Spec) – spec to be analyzed

Raises

spack.variant.UnknownVariantError – on the first unknown variant found

eq_dag(other, deptypes=True, vs=None, vo=None)[source]

True if the full dependency DAGs of specs are equal.

eq_node(other)[source]

Equality with another spec, not including dependencies.

property external
static extract_json_from_clearsig(data)[source]
flat_dependencies(**kwargs)[source]

Return a DependencyMap containing all of this spec’s dependencies with their constraints merged.

If copy is True, returns merged copies of its dependencies without modifying the spec it’s called on.

If copy is False, clears this spec’s dependencies and returns them. This disconnects all dependency links including transitive dependencies, except for concrete specs: if a spec is concrete it will not be disconnected from its dependencies (although a non-concrete spec with concrete dependencies will be disconnected from those dependencies).

format(format_string='{name}{@version}{%compiler.name}{@compiler.version}{compiler_flags}{variants}{arch=architecture}', **kwargs)[source]

Prints out particular pieces of a spec, depending on what is in the format string.

Using the {attribute} syntax, any field of the spec can be selected. Those attributes can be recursive. For example, s.format('{compiler.version}') will print the version of the compiler.

Commonly used attributes of the Spec for format strings include:

name
version
compiler
compiler.name
compiler.version
compiler_flags
variants
architecture
architecture.platform
architecture.os
architecture.target
prefix

Some additional special-case properties can be added:

hash[:len]    The DAG hash with optional length argument
spack_root    The spack root directory
spack_install The spack install directory

The ^ sigil can be used to access dependencies by name. s.format('{^mpi.name}') will print the name of the MPI implementation in the spec.

The @, %, arch=, and / sigils can be used to include the sigil with the printed string. These sigils may only be used with the appropriate attributes, listed below:

@        ``{@version}``, ``{@compiler.version}``
%        ``{%compiler}``, ``{%compiler.name}``
arch=    ``{arch=architecture}``
/        ``{/hash}``, ``{/hash:7}``, etc

The @ sigil may also be used for any other property named version. Sigils printed with the attribute string are only printed if the attribute string is non-empty, and are colored according to the color of the attribute.

Sigils are not used for printing variants. Variants listed by name naturally print with their sigil. For example, spec.format('{variants.debug}') would print either +debug or ~debug depending on the name of the variant. Non-boolean variants print as name=value. To print variant names or values independently, use spec.format('{variants.<name>.name}') or spec.format('{variants.<name>.value}').

Spec format strings use \ as the escape character. Use \{ and \} for literal braces, and \\ for the literal \ character. Also use \$ for the literal $ to differentiate from previous, deprecated format string syntax.

The previous format strings are deprecated. They can still be accessed by the old_format method. The format method will call old_format if the character $ appears unescaped in the format string.

Parameters

format_string (str) – string containing the format to be expanded

Keyword Arguments
  • color (bool) – True if returned string is colored

  • transform (dict) – maps full-string formats to a callable that accepts a string and returns another one
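
A toy version of the {attribute} expansion described above, using a regex and getattr on a stand-in object (ToySpec and toy_format are hypothetical; the real format method supports recursion, coloring, dependencies, and more):

```python
import re

class ToySpec:
    """Stand-in object; the real Spec exposes many more attributes."""
    name = "hdf5"
    version = "1.12.0"

def toy_format(spec, template):
    """Expand '{attr}' and sigil forms like '{@version}' against `spec`.

    As described above, the sigil is printed only when the attribute
    value is non-empty.
    """
    def expand(match):
        sigil, attr = match.group(1), match.group(2)
        value = getattr(spec, attr, "")
        return "%s%s" % (sigil, value) if value else ""
    return re.sub(r"\{([@%/]?)([A-Za-z_.]+)\}", expand, template)

print(toy_format(ToySpec(), "{name}{@version}"))  # hdf5@1.12.0
```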

static from_detection(spec_str, extra_attributes=None)[source]

Construct a spec from a spec string determined during external detection and attach extra attributes to it.

Parameters
  • spec_str (str) – spec string

  • extra_attributes (dict) – dictionary containing extra attributes

Returns

external spec

Return type

spack.spec.Spec

static from_dict(data)[source]

Construct a spec from JSON/YAML.

Parameters

data – a nested dict/list data structure read from YAML or JSON.

static from_json(stream)[source]

Construct a spec from JSON.

Parameters

stream – string or file object to read from.

static from_literal(spec_dict, normal=True)[source]

Builds a Spec from a dictionary containing the spec literal.

The dictionary must have a single top level key, representing the root, and as many secondary level keys as needed in the spec.

The keys can be either a string or a Spec or a tuple containing the Spec and the dependency types.

Parameters
  • spec_dict (dict) – the dictionary containing the spec literal

  • normal (bool) – if True the same key appearing at different levels of the spec_dict will map to the same object in memory.

Examples

A simple spec foo with no dependencies:

{'foo': None}

A spec foo with a (build, link) dependency bar:

{'foo':
    {'bar:build,link': None}}

A spec with a diamond dependency and various build types:

{'dt-diamond': {
    'dt-diamond-left:build,link': {
        'dt-diamond-bottom:build': None
    },
    'dt-diamond-right:build,link': {
        'dt-diamond-bottom:build,link,run': None
    }
}}

The same spec with a double copy of dt-diamond-bottom and no diamond structure:

{'dt-diamond': {
    'dt-diamond-left:build,link': {
        'dt-diamond-bottom:build': None
    },
    'dt-diamond-right:build,link': {
        'dt-diamond-bottom:build,link,run': None
    }
}, normal=False}

Constructing a spec using a Spec object as key:

mpich = Spec('mpich')
libelf = Spec('libelf@1.8.11')
expected_normalized = Spec.from_literal({
    'mpileaks': {
        'callpath': {
            'dyninst': {
                'libdwarf': {libelf: None},
                libelf: None
            },
            mpich: None
        },
        mpich: None
    },
})
static from_node_dict(node)[source]
static from_signed_json(stream)[source]

Construct a spec from clearsigned json spec file.

Parameters

stream – string or file object to read from.

static from_specfile(path)[source]

Construct a spec from a JSON or YAML spec file path

static from_yaml(stream)[source]

Construct a spec from YAML.

Parameters

stream – string or file object to read from.

property fullname
index(deptype='all')[source]

Return a dictionary that points to all the dependencies in this spec.

static inject_patches_variant(root)[source]
install_status()[source]

Helper for tree to print DB install status.

property installed

Installation status of a package.

Returns

True if the package has been installed, False otherwise.

property installed_upstream

Whether the spec is installed in an upstream repository.

Returns

True if the package is installed in an upstream, False otherwise.

node_dict_with_hashes(hash=<spack.hash_types.SpecHashDescriptor object>)[source]

Returns a node_dict of this spec with the dag hash added. If this spec is concrete, the full hash is added as well. If ‘build’ is in the hash_type, the build hash is also added.

normalize(force=False, tests=False, user_spec_deps=None)[source]

When specs are parsed, any dependencies specified are hanging off the root, and ONLY the ones that were explicitly provided are there. Normalization turns a partial flat spec into a DAG, where:

  1. Known dependencies of the root package are in the DAG.

  2. Each node’s dependencies dict only contains its known direct deps.

  3. There is only ONE unique spec for each package in the DAG.

    • This includes virtual packages. If there is a non-virtual package that provides a virtual package that is in the spec, then we replace the virtual package with the non-virtual one.

TODO: normalize should probably implement some form of cycle detection, to ensure that the spec is actually a DAG.

normalized()[source]

Return a normalized copy of this spec without modifying this spec.

old_format(format_string='$_$@$%@+$+$=', **kwargs)[source]

The format strings you can provide are:

$_   Package name
$.   Full package name (with namespace)
$@   Version with '@' prefix
$%   Compiler with '%' prefix
$%@  Compiler with '%' prefix & compiler version with '@' prefix
$%+  Compiler with '%' prefix & compiler flags prefixed by name
$%@+ Compiler, compiler version, and compiler flags with same
     prefixes as above
$+   Options
$=   Architecture prefixed by 'arch='
$/   7-char prefix of DAG hash with '-' prefix
$$   $

You can also use full-string versions, which elide the prefixes:

${PACKAGE}       Package name
${FULLPACKAGE}   Full package name (with namespace)
${VERSION}       Version
${COMPILER}      Full compiler string
${COMPILERNAME}  Compiler name
${COMPILERVER}   Compiler version
${COMPILERFLAGS} Compiler flags
${OPTIONS}       Options
${ARCHITECTURE}  Architecture
${PLATFORM}      Platform
${OS}            Operating System
${TARGET}        Target
${SHA1}          Dependencies 8-char sha1 prefix
${HASH:len}      DAG hash with optional length specifier

${DEP:name:OPTION} Evaluates as OPTION would for self['name']

${SPACK_ROOT}    The spack root directory
${SPACK_INSTALL} The default spack install directory,
                 ${SPACK_PREFIX}/opt
${PREFIX}        The package prefix
${NAMESPACE}     The package namespace

Note these are case-insensitive: for example you can specify either ${PACKAGE} or ${package}.

Optionally you can provide a width, e.g. $20_ for a 20-wide name. Like printf, you can provide ‘-‘ for left justification, e.g. $-20_ for a left-justified name.

Anything else is copied verbatim into the output stream.

Parameters

format_string (str) – string containing the format to be expanded

Keyword Arguments
  • color (bool) – True if returned string is colored

  • transform (dict) – maps full-string formats to a callable that accepts a string and returns another one

Examples

The following line:

s = spec.format('$_$@$+')

translates to the name, version, and options of the package, but no dependencies, arch, or compiler.

TODO: allow, e.g., $6# to customize short hash length.
TODO: allow, e.g., $// for full hash.

property os
static override(init_spec, change_spec)[source]
property package
property package_class

Internal package call gets only the class object for a package. Use this to just get package metadata.

package_hash()[source]

Compute the hash of the contents of the package for this node

property patches

Return patch objects for any patch sha256 sums on this Spec.

This is for use after concretization to iterate over any patches associated with this spec.

TODO: this only checks in the package; it doesn’t resurrect old patches from install directories, but it probably should.

property platform
property prefix
process_hash(length=None)[source]

Hash used to transfer specs among processes.

This hash includes build and test dependencies and is only used to serialize a spec and pass it around among processes.

process_hash_bit_prefix(bits)[source]

Get the first <bits> bits of the process hash as an integer type.

static read_yaml_dep_specs(deps, hash_type='hash')[source]

Read the DependencySpec portion of a YAML-formatted Spec. This needs to be backward-compatible with older spack spec formats so that reindex will work on old specs/databases.

property root

Follow dependent links and find the root of this spec’s DAG.

Spack specs have a single root (the package being installed).

satisfies(other, deps=True, strict=False, strict_deps=False)[source]

Determine if this spec satisfies all constraints of another.

There are two senses for satisfies:

  • loose (default): the absence of a constraint in self implies that it could be satisfied by other, so we only check that there are no conflicts with other for constraints that this spec actually has.

  • strict: strict means that we must meet all the constraints specified on other.
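The two senses can be sketched with plain dictionaries standing in for constraint sets. This is a hypothetical illustration of the loose/strict distinction, not Spec's actual matching logic:

```python
def satisfies(mine: dict, other: dict, strict: bool = False) -> bool:
    """Loose: only check for conflicts on constraints `mine` actually has.
    Strict: every constraint in `other` must be explicitly met by `mine`."""
    if strict:
        return all(k in mine and mine[k] == v for k, v in other.items())
    return all(mine[k] == v for k, v in other.items() if k in mine)
```

Loosely, a spec with no `fts` constraint could still be satisfied by one that sets it; strictly, the missing constraint is a failure.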

satisfies_dependencies(other, strict=False)[source]

This checks constraints on common dependencies against each other.

property short_spec

Returns a version of the spec with the dependencies hashed instead of completely enumerated.

spec_hash(hash)[source]

Utility method for computing different types of Spec hashes.

Parameters

hash (spack.hash_types.SpecHashDescriptor) – type of hash to generate.

splice(other, transitive)[source]

Splices dependency “other” into this (“target”) Spec, and returns the result as a concrete Spec. If transitive, then other and its dependencies will be extrapolated to a list of Specs and spliced in accordingly.

For example, let there exist a dependency graph as follows, where Spec T depends on H and Z, and H also depends on Z:

    T
    | \
    Z<-H

Suppose, however, that we wish to use a different H, known as H’. This function will splice in the new H’ in one of two ways:

  1. transitively, where H’ depends on the Z’ it was built with, and the new T* also directly depends on this new Z’, or

  2. intransitively, where the new T* and H’ both depend on the original Z.

Since the Spec returned by this splicing function is no longer deployed the same way it was built, any such changes are tracked by setting the build_spec to point to the corresponding dependency from the original Spec.

TODO: Extend this for non-concrete Specs.

property spliced

Returns whether or not this Spec is being deployed as built, i.e., whether or not this Spec has ever been spliced.

property target
to_dict(hash=<spack.hash_types.SpecHashDescriptor object>)[source]

Create a dictionary suitable for writing this spec to YAML or JSON.

This dictionary is like the one that is ultimately written to a spec.json file in each Spack installation directory. For example, for sqlite:

{
"spec": {
    "_meta": {
    "version": 2
    },
    "nodes": [
    {
        "name": "sqlite",
        "version": "3.34.0",
        "arch": {
        "platform": "darwin",
        "platform_os": "catalina",
        "target": "x86_64"
        },
        "compiler": {
        "name": "apple-clang",
        "version": "11.0.0"
        },
        "namespace": "builtin",
        "parameters": {
        "column_metadata": true,
        "fts": true,
        "functions": false,
        "rtree": false,
        "cflags": [],
        "cppflags": [],
        "cxxflags": [],
        "fflags": [],
        "ldflags": [],
        "ldlibs": []
        },
        "dependencies": [
        {
            "name": "readline",
            "hash": "4f47cggum7p4qmp3xna4hi547o66unva",
            "type": [
            "build",
            "link"
            ]
        },
        {
            "name": "zlib",
            "hash": "uvgh6p7rhll4kexqnr47bvqxb3t33jtq",
            "type": [
            "build",
            "link"
            ]
        }
        ],
        "hash": "tve45xfqkfgmzwcyfetze2z6syrg7eaf"
    },
        # ... more node dicts for readline and its dependencies ...
    ]
  }
}

Note that this dictionary starts with the ‘spec’ key, and what follows is a list starting with the root spec, followed by its dependencies in preorder. Each node in the list also has a ‘hash’ key that contains the hash of the node without the hash field included.

In the example, the package content hash is not included in the spec, but if package_hash were true there would be an additional field on each node called package_hash.

from_dict() can be used to read back in a spec that has been converted to a dictionary, serialized, and read back in.

Parameters
  • deptype (tuple or str) – dependency types to include when traversing the spec.

  • package_hash (bool) – whether to include package content hashes in the dictionary.
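The note above about each node's ‘hash’ key containing the hash of the node without the hash field can be sketched in pure Python. This is a hypothetical illustration; the canonical-JSON serialization details here are assumptions, not Spack's exact hashing scheme:

```python
import hashlib
import json

def node_hash(node: dict) -> str:
    """Hash a node dict with its own 'hash' field removed, serialized
    canonically (sorted keys, fixed separators) so the digest is stable."""
    stripped = {k: v for k, v in node.items() if k != "hash"}
    payload = json.dumps(stripped, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()
```

Because the node's own hash field is excluded, the digest does not depend on any previously stored hash value.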

to_json(stream=None, hash=<spack.hash_types.SpecHashDescriptor object>)[source]
to_node_dict(hash=<spack.hash_types.SpecHashDescriptor object>)[source]

Create a dictionary representing the state of this Spec.

to_node_dict creates the content that is eventually hashed by Spack to create identifiers like the DAG hash (see dag_hash()). Example result of to_node_dict for the sqlite package:

{
    'sqlite': {
        'version': '3.28.0',
        'arch': {
            'platform': 'darwin',
            'platform_os': 'mojave',
            'target': 'x86_64',
        },
        'compiler': {
            'name': 'apple-clang',
            'version': '10.0.0',
        },
        'namespace': 'builtin',
        'parameters': {
            'fts': 'true',
            'functions': 'false',
            'cflags': [],
            'cppflags': [],
            'cxxflags': [],
            'fflags': [],
            'ldflags': [],
            'ldlibs': [],
        },
        'dependencies': {
            'readline': {
                'hash': 'zvaa4lhlhilypw5quj3akyd3apbq5gap',
                'type': ['build', 'link'],
            }
        },
    }
}

Note that the dictionary returned does not include the hash of the root of the spec, though it does include hashes for each dependency, and (optionally) the package file corresponding to each node.

See to_dict() for a “complete” spec hash, with hashes for each node and nodes for each dependency (instead of just their hashes).

Parameters

hash (spack.hash_types.SpecHashDescriptor) –

to_yaml(stream=None, hash=<spack.hash_types.SpecHashDescriptor object>)[source]
traverse(**kwargs)[source]

Shorthand for traverse_nodes()

traverse_edges(**kwargs)[source]

Shorthand for spack.traverse.traverse_edges()

tree(**kwargs)[source]

Prints out this spec and its dependencies, tree-formatted with indentation.

update_variant_validate(variant_name, values)[source]

If it is not already there, adds the variant named variant_name to the spec based on the definition contained in the package metadata. Validates the variant and values before returning.

Used to add values to a variant without being sensitive to the variant being single or multi-valued. If the variant already exists on the spec it is assumed to be multi-valued and the values are appended.

Parameters
  • variant_name – the name of the variant to add or append to

  • values – the value or values (as a tuple) to add/append to the variant

validate_detection()[source]

Validate the detection of an external spec.

This method is used as part of Spack’s detection protocol, and is not meant for client code use.

validate_or_raise()[source]

Checks that names and values in this spec are real. If they’re not, it will raise an appropriate exception.

property version
property virtual
virtual_dependencies()[source]

Return list of any virtual deps in this spec.

exception spack.spec.SpecDeprecatedError(message, long_message=None)[source]

Bases: SpecError

Raised when a spec concretizes to a deprecated spec or dependency.

exception spack.spec.SpecParseError(parse_error)[source]

Bases: SpecError

Wrapper for ParseError for when we’re parsing specs.

property long_message
class spack.spec.SpecParser(initial_spec=None)[source]

Bases: Parser

Parses specs.

check_identifier(id=None)[source]

The only identifiers that can contain ‘.’ are versions, but version ids are context-sensitive so we have to check on a case-by-case basis. Call this if we detect a version id where it shouldn’t be.

compiler()[source]
do_parse()[source]
parse_compiler(text)[source]
previous
spec(name)[source]

Parse a spec out of the input. If a spec is supplied, initialize and return it instead of creating a new one.

spec_by_hash()[source]
spec_from_file()[source]

Read a spec from a filename parsed on the input stream.

There is some care taken here to ensure that filenames are a last resort, and that any valid package name is parsed as a name before we consider it as a file. Specs are used in lots of places; we don’t want the parser touching the filesystem unnecessarily.

The parse logic is as follows:

  1. We require that filenames end in .yaml, which means that no valid filename can be interpreted as a hash (hashes can’t have ‘.’)

  2. We avoid treating paths like /path/to/spec.json as hashes, or paths like subdir/spec.json as ids by lexing filenames before hashes.

  3. For spec names that match file and id regexes, like ‘builtin.yaml’, we backtrack from spec_from_file() and treat them as spec names.

variant(name=None)[source]
version()[source]
version_list()[source]
exception spack.spec.UnsatisfiableArchitectureSpecError(provided, required)[source]

Bases: UnsatisfiableSpecError

Raised when a spec architecture conflicts with package constraints.

exception spack.spec.UnsatisfiableCompilerFlagSpecError(provided, required)[source]

Bases: UnsatisfiableSpecError

Raised when a spec variant conflicts with package constraints.

exception spack.spec.UnsatisfiableCompilerSpecError(provided, required)[source]

Bases: UnsatisfiableSpecError

Raised when a spec compiler conflicts with package constraints.

exception spack.spec.UnsatisfiableDependencySpecError(provided, required)[source]

Bases: UnsatisfiableSpecError

Raised when some dependency of constrained specs is incompatible.

exception spack.spec.UnsatisfiableProviderSpecError(provided, required)[source]

Bases: UnsatisfiableSpecError

Raised when a provider is supplied but constraints don’t match a vpkg requirement

exception spack.spec.UnsatisfiableSpecNameError(provided, required)[source]

Bases: UnsatisfiableSpecError

Raised when two specs aren’t even for the same package.

exception spack.spec.UnsatisfiableVersionSpecError(provided, required)[source]

Bases: UnsatisfiableSpecError

Raised when a spec version conflicts with package constraints.

exception spack.spec.UnsupportedCompilerError(compiler_name)[source]

Bases: SpecError

Raised when the user asks for a compiler spack doesn’t know about.

spack.spec.parse(string)[source]

Returns a list of specs from an input string. For creating one spec, see Spec() constructor.

spack.spec_list module

exception spack.spec_list.InvalidSpecConstraintError(message, long_message=None)[source]

Bases: SpecListError

Error class for invalid spec constraints at concretize time.

class spack.spec_list.SpecList(name='specs', yaml_list=None, reference=None)[source]

Bases: object

add(spec)[source]
extend(other, copy_reference=True)[source]
property is_matrix
remove(spec)[source]
property specs
property specs_as_constraints
property specs_as_yaml_list
update_reference(reference)[source]
exception spack.spec_list.SpecListError(message, long_message=None)[source]

Bases: SpackError

Error class for all errors related to SpecList objects.

exception spack.spec_list.UndefinedReferenceError(message, long_message=None)[source]

Bases: SpecListError

Error class for undefined references in Spack stacks.

spack.stage module

class spack.stage.DIYStage(path)[source]

Bases: object

Simple class that allows any directory to be a spack stage. Consequently, it does not expect or require that the source path adhere to the standard directory naming convention.

cache_local()[source]
check()[source]
create()[source]
destroy()[source]
expand_archive()[source]
property expanded

Returns True since the source_path must exist.

fetch(*args, **kwargs)[source]
managed_by_spack = False
restage()[source]
class spack.stage.ResourceStage(url_or_fetch_strategy, root, resource, **kwargs)[source]

Bases: Stage

expand_archive()[source]

Changes to the stage directory and attempts to expand the downloaded archive. Fails if the stage is not set up or if the archive is not yet downloaded.

restage()[source]

Removes the expanded archive path if it exists, then re-expands the archive.

exception spack.stage.RestageError(message, long_message=None)[source]

Bases: StageError

Error encountered during restaging.

class spack.stage.Stage(url_or_fetch_strategy, name=None, mirror_paths=None, keep=False, path=None, lock=True, search_fn=None)[source]

Bases: object

Manages a temporary stage directory for building.

A Stage object is a context manager that handles a directory where some source code is downloaded and built before being installed. It handles fetching the source code, either as an archive to be expanded or by checking it out of a repository. A stage’s lifecycle looks like this:

with Stage() as stage:      # Context manager creates and destroys the
                            # stage directory
    stage.fetch()           # Fetch a source archive into the stage.
    stage.expand_archive()  # Expand the archive into source_path.
    <install>               # Build and install the archive.
                            # (handled by user of Stage)

When used as a context manager, the stage is automatically destroyed if no exception is raised by the context. If an exception is raised, the stage is left in the filesystem and NOT destroyed, for potential reuse later.

You can also use the stage’s create/destroy functions manually, like this:

stage = Stage()
try:
    stage.create()          # Explicitly create the stage directory.
    stage.fetch()           # Fetch a source archive into the stage.
    stage.expand_archive()  # Expand the archive into source_path.
    <install>               # Build and install the archive.
                            # (handled by user of Stage)
finally:
    stage.destroy()         # Explicitly destroy the stage directory.

There are two kinds of stages: named and unnamed. Named stages can persist between runs of spack, e.g. if you fetched a tarball but didn’t finish building it, you won’t have to fetch it again.

Unnamed stages are created using standard mkdtemp mechanisms or similar, and are intended to persist for only one run of spack.
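The keep-on-error behavior of the context manager can be sketched with a minimal stand-in class. This is hypothetical and not the real Stage implementation, which also handles fetching, locking, and naming:

```python
import os
import shutil
import tempfile

class MiniStage:
    """Hypothetical minimal stage: a temporary directory destroyed only
    when the context exits cleanly, keeping it on error for inspection."""

    def __init__(self):
        self.path = None

    def create(self):
        self.path = tempfile.mkdtemp(prefix="mini-stage-")

    def destroy(self):
        if self.path and os.path.isdir(self.path):
            shutil.rmtree(self.path)
        self.path = None

    def __enter__(self):
        self.create()
        return self

    def __exit__(self, exc_type, exc, tb):
        # on an exception, leave the directory on disk for potential reuse
        if exc_type is None:
            self.destroy()
        return False
```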

property archive_file

Path to the source archive within this stage directory.

cache_local()[source]
cache_mirror(mirror, stats)[source]

Perform a fetch if the resource is not already cached

check()[source]

Check the downloaded archive against a checksum digest. No-op if this stage checks code out of a repository.

create()[source]

Ensures the top-level (config:build_stage) directory exists.

destroy()[source]

Removes this stage directory.

expand_archive()[source]

Changes to the stage directory and attempts to expand the downloaded archive. Fails if the stage is not set up or if the archive is not yet downloaded.

property expanded

Returns True if source path expanded; else False.

property expected_archive_files

Possible archive file paths.

fetch(mirror_only=False, err_msg=None)[source]

Retrieves the code or archive

Parameters
  • mirror_only (bool) – only fetch from a mirror

  • err_msg (str or None) – the error message to display if all fetchers fail or None for the default fetch failure message

managed_by_spack = True
restage()[source]

Removes the expanded archive path if it exists, then re-expands the archive.

property save_filename
property source_path

Returns the well-known source directory path.

stage_locks = {}

Most staging is managed by Spack. DIYStage is one exception.

steal_source(dest)[source]

Copy the source_path directory in its entirety to directory dest

This operation creates, fetches, and expands the stage if that has not already been done, and destroys the stage when it is finished.

class spack.stage.StageComposite[source]

Bases: Composite

Composite for Stage type objects. The first item in this composite is considered to be the root package, and operations that return a value are forwarded to it.

property archive_file
property expanded
property path
property source_path
exception spack.stage.StageError(message, long_message=None)[source]

Bases: SpackError

Superclass for all errors encountered during staging.

exception spack.stage.StagePathError(message, long_message=None)[source]

Bases: StageError

Error encountered with stage path.

exception spack.stage.VersionFetchError(message, long_message=None)[source]

Bases: StageError

Raised when we can’t determine a URL to fetch a package.

spack.stage.create_stage_root(path)[source]

Create the stage root directory and ensure appropriate access perms.

spack.stage.ensure_access(file)[source]

Ensure we can access a directory and die with an error if we can’t.

spack.stage.get_checksums_for_versions(url_dict, name, **kwargs)[source]

Fetches and checksums archives from URLs.

This function is called by both spack checksum and spack create. The first_stage_function argument allows the caller to inspect the first downloaded archive, e.g., to determine the build system.

Parameters
  • url_dict (dict) – A dictionary of the form: version -> URL

  • name (str) – The name of the package

  • first_stage_function (Callable) – function that takes a Stage and a URL; this is run on the stage of the first URL downloaded

  • keep_stage (bool) – whether to keep staging area when command completes

  • batch (bool) – whether to ask user how many versions to fetch (false) or fetch all versions (true)

  • latest (bool) – whether to take the latest version (true) or all (false)

  • fetch_options (dict) – Options used for the fetcher (such as timeout or cookies)

Returns

A multi-line string containing versions and corresponding hashes

Return type

(str)
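A rough sketch of the returned multi-line format, hashing in-memory payloads instead of fetching URLs. This is hypothetical: the function name, the naive string sort, and the exact formatting are assumptions, not Spack's implementation:

```python
import hashlib

def checksums_for_versions(version_to_bytes: dict) -> str:
    """Emit one version(...)/sha256 line per entry, newest first."""
    lines = []
    # naive reverse string sort; real version ordering is more involved
    for version, payload in sorted(version_to_bytes.items(), reverse=True):
        digest = hashlib.sha256(payload).hexdigest()
        lines.append('    version("{}", sha256="{}")'.format(version, digest))
    return "\n".join(lines)
```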

spack.stage.get_stage_root()[source]
spack.stage.purge()[source]

Remove all build directories in the top-level stage path.

spack.store module

Components that manage Spack’s installation tree.

An install tree, or “build store” consists of two parts:

  1. A package database that tracks what is installed.

  2. A directory layout that determines how the installations are laid out.

The store contains all the install prefixes for packages installed by Spack. The simplest store could just contain prefixes named by DAG hash, but we use a fancier directory layout to make browsing the store and debugging easier.

exception spack.store.MatchError(message, long_message=None)[source]

Bases: SpackError

Error occurring when trying to match specs in store against a constraint

class spack.store.Store(root, unpadded_root=None, projections=None, hash_length=None)[source]

Bases: object

A store is a path full of installed Spack packages.

Stores consist of packages installed according to a DirectoryLayout, along with an index, or database, of their contents. The directory layout controls what paths look like and how Spack ensures that each unique spec gets its own unique directory (or not, though we don’t recommend that). The database is a single file that caches metadata for the entire Spack installation. It prevents us from having to spider the install tree to figure out what’s there.

Parameters
  • root (str) – path to the root of the install tree

  • unpadded_root (str) – path to the root of the install tree without padding; the sbang script has to be installed here to work with padded roots

  • path_scheme (str) – expression according to guidelines in spack.util.path that describes how to construct a path to a package prefix in this store

  • hash_length (int) – length of the hashes used in the directory layout; spec hash suffixes will be truncated to this length
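The projection idea behind the directory layout can be sketched as template expansion over spec attributes. This is a hypothetical illustration; the template shown is an assumption and not Spack's full projection grammar:

```python
import os

def install_prefix(root: str, spec: dict,
                   projection: str = "{name}-{version}-{hash}") -> str:
    """Expand a projection template into a package prefix under the root."""
    return os.path.join(root, projection.format(**spec))
```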

static deserialize(token)[source]

Return a store reconstructed from a token created by the serialize method.

Parameters

token – return value of the serialize method

Returns

Store object reconstructed from the token

reindex()[source]

Convenience function to reindex the store DB with its own layout.

serialize()[source]

Return a pickle-able object that can be used to reconstruct a store.

spack.store.default_install_tree_root = '/home/docs/checkouts/readthedocs.org/user_builds/spack/checkouts/v0.19.1/lib/spack/docs/_spack_root/opt/spack'

default installation root, relative to the Spack install path

spack.store.find(constraints, multiple=False, query_fn=None, **kwargs)[source]

Return a list of specs matching the constraints passed as inputs.

At least one spec per constraint must match, otherwise the function will error with an appropriate message.

By default, this function queries the current store, but a custom query function can be passed to hit any other source of concretized specs (e.g. a binary cache).

The query function must accept a spec as its first argument.

Parameters
  • constraints (List[spack.spec.Spec]) – specs to be matched against installed packages

  • multiple (bool) – if True multiple matches per constraint are admitted

  • query_fn (Callable) – query function to get matching specs. By default, spack.store.db.query

  • **kwargs – keyword arguments forwarded to the query function

Returns

List of matching specs

spack.store.parse_install_tree(config_dict)[source]

Parse config settings and return values relevant to the store object.

Parameters

config_dict (dict) – dictionary of config values, as returned from spack.config.get(‘config’)

Returns

Triple of the install tree root, the unpadded install tree root (before padding was applied), and the projections for the install tree

Return type

(tuple)

Encapsulate backwards compatibility capabilities for install_tree and deprecated values that are now parsed as part of install_tree.

spack.store.reinitialize()[source]

Restore globals to the same state they would have at start-up. Return a token containing the state of the store before reinitialization.

spack.store.restore(token)[source]

Restore the environment from a token returned by reinitialize

spack.store.retrieve_upstream_dbs()[source]
spack.store.specfile_matches(filename, **kwargs)[source]

Same as find but reads the query from a spec file.

Parameters
  • filename (str) – YAML or JSON file from which to read the query.

  • **kwargs – keyword arguments forwarded to “find”

Returns

List of matches

spack.store.store = <spack.store.Store object>

Singleton store instance

spack.store.use_store(store_or_path)[source]

Use the store passed as argument within the context manager.

Parameters

store_or_path – either a Store object or a path to where the store resides

Returns

Store object associated with the context manager’s store

spack.subprocess_context module

This module handles transmission of Spack state to child processes started using the ‘spawn’ start method. Notably, installations are performed in a subprocess and require transmitting the Package object (in such a way that the repository is available for importing when it is deserialized); installations performed in Spack unit tests may include additional modifications to global state in memory that must be replicated in the child process.

class spack.subprocess_context.PackageInstallContext(pkg)[source]

Bases: object

Captures the in-memory process state of a package installation that needs to be transmitted to a child process.

restore()[source]
class spack.subprocess_context.SpackTestProcess(fn)[source]

Bases: object

create()[source]
class spack.subprocess_context.TestPatches(module_patches, class_patches)[source]

Bases: object

restore()[source]
class spack.subprocess_context.TestState[source]

Bases: object

Spack tests may modify state that is normally read from disk in memory; this object is responsible for properly serializing that state to be applied to a subprocess. This isn’t needed outside of a testing environment but this logic is designed to behave the same inside or outside of tests.

restore()[source]
spack.subprocess_context.append_patch(patch)[source]
spack.subprocess_context.clear_patches()[source]
spack.subprocess_context.serialize(obj)[source]
spack.subprocess_context.store_patches()[source]

spack.tag module

Classes and functions to manage package tags

class spack.tag.TagIndex(repository)[source]

Bases: Mapping

Maps tags to list of packages.

copy()[source]

Return a deep copy of this index.

static from_json(stream, repository)[source]
get_packages(tag)[source]

Returns all packages associated with the tag.

merge(other)[source]

Merge another tag index into this one.

Parameters

other (TagIndex) – tag index to be merged

property tags
to_json(stream)[source]
update_package(pkg_name)[source]

Updates a package in the tag index.

Parameters

pkg_name (str) – name of the package to be updated in the index
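A minimal sketch of the tag-to-packages mapping, including the re-indexing behavior of update_package and a deduplicating merge. This is hypothetical and not the real TagIndex, which reads tags from a package repository:

```python
from collections import defaultdict

class MiniTagIndex:
    """Hypothetical sketch of a tag -> package-name index."""

    def __init__(self):
        self.tags = defaultdict(list)

    def update_package(self, pkg_name, pkg_tags):
        # drop any stale entries for the package, then re-add it
        for pkgs in self.tags.values():
            if pkg_name in pkgs:
                pkgs.remove(pkg_name)
        for tag in pkg_tags:
            self.tags[tag].append(pkg_name)

    def merge(self, other):
        # union the other index into this one without duplicating names
        for tag, pkgs in other.tags.items():
            for name in pkgs:
                if name not in self.tags[tag]:
                    self.tags[tag].append(name)
```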

exception spack.tag.TagIndexError(message, long_message=None)[source]

Bases: SpackError

Raised when there is a problem with a TagIndex.

spack.tag.packages_with_tags(tags, installed, skip_empty)[source]

Returns a dict, indexed by tag, containing lists of names of packages with that tag, covering either the given tags or, if tags is None, all available tags.

Parameters
  • tags (list or None) – list of tags of interest or None for all

  • installed (bool) – True if want names of packages that are installed; otherwise, False if want all packages with the tag

  • skip_empty (bool) – True if exclude tags with no associated packages; otherwise, False if want entries for all tags even when no such tagged packages

spack.target module

class spack.target.Target(name, module_name=None)[source]

Bases: object

static from_dict_or_value(dict_or_value)[source]
property name
optimization_flags(compiler)[source]

Returns the flags needed to optimize for this target using the compiler passed as argument.

Parameters

compiler (spack.spec.CompilerSpec or spack.compiler.Compiler) – object that contains both the name and the version of the compiler we want to use

to_dict_or_value()[source]

Returns a dict or a value representing the current target.

String values are used to keep backward compatibility with generic targets, like e.g. x86_64 or ppc64. More specific micro-architectures will return a dictionary which contains information on the name, features, vendor, generation and parents of the current target.

spack.tengine module

class spack.tengine.Context[source]

Bases: object

Base class for context classes that are used with the template engine.

context_properties = []
to_dict()[source]

Returns a dictionary containing all the context properties.

class spack.tengine.ContextMeta(name, bases, attr_dict)[source]

Bases: type

Meta class for Context. It helps reduce boilerplate in client code.

classmethod context_property(func)[source]

Decorator that adds a function name to the list of new context properties, and then returns a property.

spack.tengine.context_property(func)

A saner way to use the decorator
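The metaclass pattern can be sketched as follows. This is a simplified, hypothetical version: it does not accumulate properties from base classes, which a full implementation would need to do:

```python
class ContextMeta(type):
    """Collect functions marked as context properties so that to_dict()
    can enumerate them without per-class boilerplate."""
    _new_properties = []

    def __new__(mcs, name, bases, attrs):
        # attach the properties registered while the class body executed
        attrs["context_properties"] = list(mcs._new_properties)
        mcs._new_properties.clear()
        return super().__new__(mcs, name, bases, attrs)

def context_property(func):
    """Register a function as a context property and expose it as one."""
    ContextMeta._new_properties.append(func.__name__)
    return property(func)

class Context(metaclass=ContextMeta):
    def to_dict(self):
        return {name: getattr(self, name) for name in self.context_properties}

class MyContext(Context):
    @context_property
    def greeting(self):
        return "hello"
```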

spack.tengine.make_environment(dirs=None)[source]

Returns a configured environment for template rendering.

spack.tengine.prepend_to_line(text, token)[source]

Prepends a token to each line in text

spack.tengine.quote(text)[source]

Quotes each line in text

spack.traverse module

spack.traverse.traverse_edges(specs, root=True, order='pre', cover='nodes', direction='children', deptype='all', depth=False, key=<built-in function id>, visited=None)[source]

Generator that yields edges from the DAG, starting from a list of root specs.

Parameters
  • specs (list) – List of root specs (considered to be depth 0)

  • root (bool) – Yield the root nodes themselves

  • order (str) – What order of traversal to use in the DAG. For depth-first search this can be pre or post. For BFS this should be breadth.

  • cover (str) – Determines how extensively to cover the DAG. Possible values: nodes – Visit each unique node in the DAG only once. edges – If a node has been visited once but is reached along a new path, it’s accepted, but not recursively followed. This traverses each ‘edge’ in the DAG once. paths – Explore every unique path reachable from the root. This descends into visited subtrees and will accept nodes multiple times if they’re reachable by multiple paths.

  • direction (str) – children or parents. If children, does a traversal of this spec’s children. If parents, traverses upwards in the DAG towards the root.

  • deptype (str or tuple) – allowed dependency types

  • depth (bool) – When False, yield just edges. When True yield the tuple (depth, edge), where depth corresponds to the depth at which edge.spec was discovered.

  • key – function that takes a spec and outputs a key for uniqueness test.

  • visited (set or None) – a set of nodes not to follow

Returns

A generator that yields DependencySpec if depth is False or a tuple of (depth, DependencySpec) if depth is True.
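The difference between the nodes and edges cover values can be sketched with a plain adjacency-dict DAG. This is a hypothetical illustration, not the real traversal code:

```python
def traverse_edges(roots, graph, cover="nodes"):
    """Yield (parent, child) edges in depth-first preorder.

    graph maps node -> list of children. cover='nodes' visits each node
    only once; cover='edges' yields every edge once but does not recurse
    into nodes that were already visited."""
    visited = set()

    def visit(parent, node):
        already_seen = node in visited
        if cover == "nodes" and already_seen:
            return
        yield (parent, node)
        visited.add(node)
        if not already_seen:
            for child in graph.get(node, []):
                yield from visit(node, child)

    for root in roots:
        yield from visit(None, root)
```

With the diamond graph from the splice() example (T depends on Z and H, and H depends on Z), cover='edges' additionally yields the H -> Z edge.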

spack.traverse.traverse_nodes(specs, root=True, order='pre', cover='nodes', direction='children', deptype='all', depth=False, key=<built-in function id>, visited=None)[source]

Generator that yields specs from the DAG, starting from a list of root specs.

Parameters
  • specs (list) – List of root specs (considered to be depth 0)

  • root (bool) – Yield the root nodes themselves

  • order (str) – What order of traversal to use in the DAG. For depth-first search this can be pre or post. For BFS this should be breadth.

  • cover (str) – Determines how extensively to cover the DAG. Possible values: nodes – Visit each unique node in the DAG only once. edges – If a node has been visited once but is reached along a new path, it’s accepted, but not recursively followed. This traverses each ‘edge’ in the DAG once. paths – Explore every unique path reachable from the root. This descends into visited subtrees and will accept nodes multiple times if they’re reachable by multiple paths.

  • direction (str) – children or parents. If children, does a traversal of this spec’s children. If parents, traverses upwards in the DAG towards the root.

  • deptype (str or tuple) – allowed dependency types

  • depth (bool) – When False, yield just the specs. When True yield the tuple (depth, spec), where depth corresponds to the depth at which the spec was discovered.

  • key – function that takes a spec and outputs a key for uniqueness test.

  • visited (set or None) – a set of nodes not to follow

Yields

By default Spec, or a tuple (depth, Spec) if depth is set to True.

spack.traverse.traverse_tree(specs, cover='nodes', deptype='all', key=<built-in function id>, depth_first=True)[source]

Generator that yields (depth, DependencySpec) tuples in the depth-first pre-order, so that a tree can be printed from it.

Parameters
  • specs (list) – List of root specs (considered to be depth 0)

  • cover (str) – Determines how extensively to cover the DAG. Possible values: nodes – Visit each unique node in the DAG only once. edges – If a node has been visited once but is reached along a new path, it’s accepted, but not recursively followed. This traverses each ‘edge’ in the DAG once. paths – Explore every unique path reachable from the root. This descends into visited subtrees and will accept nodes multiple times if they’re reachable by multiple paths.

  • deptype (str or tuple) – allowed dependency types

  • key – function that takes a spec and outputs a key for uniqueness test.

  • depth_first (bool) – Explore the tree in depth-first or breadth-first order. When setting depth_first=True and cover=nodes, each spec only occurs once at the shallowest level, which is useful when rendering the tree in a terminal.

Returns

A generator that yields (depth, DependencySpec) tuples in such an order that a tree can be printed.

spack.url module

This module has methods for parsing names and versions of packages from URLs. The idea is to allow package creators to supply nothing more than the download location of the package, and figure out version and name information from there.

Example: when spack is given the following URL:

It can figure out that the package name is hdf, and that it is at version 4.2.12. This is useful for making the creation of packages simple: a user just supplies a URL and skeleton code is generated automatically.

Spack can also figure out that it can most likely download 4.2.6 at this URL:

This is useful if a user asks for a package at a particular version number; spack doesn’t need anyone to tell it where to get the tarball even though it’s never been told about that version before.

exception spack.url.UndetectableNameError(path)[source]

Bases: UrlParseError

Raised when we can’t parse a package name from a string.

exception spack.url.UndetectableVersionError(path)[source]

Bases: UrlParseError

Raised when we can’t parse a version from a string.

exception spack.url.UrlParseError(msg, path)[source]

Bases: SpackError

Raised when the URL module can’t parse something correctly.

spack.url.color_url(path, **kwargs)[source]

Color the parts of the url according to Spack’s parsing.

Colors are:
Cyan: The version found by parse_version_offset().
Red: The name found by parse_name_offset().
Green: Instances of version string from substitute_version().
Magenta: Instances of the name (protected from substitution).
Parameters
  • path (str) – The filename or URL for the package

  • errors (bool) – Append parse errors at end of string.

  • subs (bool) – Color substitutions as well as parsed name/version.

spack.url.cumsum(elts, init=0, fn=<function <lambda>>)[source]

Return cumulative sum of result of fn on each element in elts.
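As a rough sketch of the documented behavior (not Spack’s actual implementation; whether the running total is recorded before or after each element is an assumption of this illustration):

```python
def cumsum(elts, init=0, fn=lambda x: x):
    # Illustrative sketch: running total of fn(elt), seeded with init.
    # This version records the total after adding each element.
    sums = []
    total = init
    for elt in elts:
        total += fn(elt)
        sums.append(total)
    return sums
```

For example, cumsum(['ab', 'c'], fn=len) yields the running character counts [2, 3].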

spack.url.determine_url_file_extension(path)[source]

This returns the type of archive a URL refers to. This is sometimes confusing because of URLs like:

  1. https://github.com/petdance/ack/tarball/1.93_02

Where the URL doesn’t actually contain the filename. We need to know what type it is so that we can appropriately name files in mirrors.

spack.url.find_all(substring, string)[source]

Returns a list containing the indices of every occurrence of substring in string.
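A minimal sketch of the documented behavior (whether Spack’s version counts overlapping occurrences is an assumption here):

```python
def find_all(substring, string):
    # Collect the start index of every (possibly overlapping)
    # occurrence of substring in string.
    indices = []
    start = string.find(substring)
    while start != -1:
        indices.append(start)
        start = string.find(substring, start + 1)
    return indices
```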

spack.url.find_list_urls(url)[source]

Find good list URLs for the supplied URL.

By default, returns the dirname of the archive path.

Provides special treatment for the following websites, which have a unique list URL different from the dirname of the download URL:

GitHub

https://github.com/<repo>/<name>/releases

GitLab

https://gitlab.*/<repo>/<name>/tags

BitBucket

https://bitbucket.org/<repo>/<name>/downloads/?tab=tags

CRAN

https://*.r-project.org/src/contrib/Archive/<name>

PyPI

https://pypi.org/simple/<name>/

LuaRocks

https://luarocks.org/modules/<repo>/<name>

Note: this function is called by spack versions, spack checksum, and spack create, but not by spack fetch or spack install.

Parameters

url (str) – The download URL for the package

Returns

One or more list URLs for the package

Return type

set
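The GitHub rule above can be sketched as follows. This is an illustration only: the helper name github_list_url is hypothetical, it handles just the GitHub case, and it returns a single URL or None rather than the set the real function returns.

```python
import re

def github_list_url(download_url):
    # Map a GitHub download URL to the project's releases page,
    # mirroring the GitHub rule in the table above.
    m = re.search(r"github\.com/([^/]+)/([^/]+)", download_url)
    if not m:
        return None
    owner, name = m.groups()
    return "https://github.com/{0}/{1}/releases".format(owner, name)
```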

spack.url.insensitize(string)[source]

Change upper and lowercase letters to be case insensitive in the provided string. e.g., ‘a’ becomes ‘[Aa]’, ‘B’ becomes ‘[bB]’, etc. Use for building regexes.
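A sketch of the documented behavior (the ordering of letters inside each character class is arbitrary and does not affect matching):

```python
def insensitize(string):
    # Replace each letter with a two-letter character class so the
    # resulting regex matches either case; leave other chars alone.
    def ci(char):
        if char.isalpha():
            return "[{0}{1}]".format(char.upper(), char.lower())
        return char
    return "".join(ci(c) for c in string)
```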

spack.url.parse_name(path, ver=None)[source]

Try to determine the name of a package from its filename or URL.

Parameters
  • path (str) – The filename or URL for the package

  • ver (str) – The version of the package

Returns

The name of the package

Return type

str

Raises

UndetectableNameError – If the URL does not match any regexes

spack.url.parse_name_and_version(path)[source]

Try to determine the name of a package and extract its version from its filename or URL.

Parameters

path (str) – The filename or URL for the package

Returns

a tuple containing the package (name, version)

Return type

tuple

Raises

UndetectableNameError or UndetectableVersionError – If the URL does not match any regexes

spack.url.parse_name_offset(path, v=None)[source]

Try to determine the name of a package from its filename or URL.

Parameters
  • path (str) – The filename or URL for the package

  • v (str) – The version of the package

Returns

A tuple containing:

name of the package, first index of name, length of name, the index of the matching regex, the matching regex

Return type

tuple

Raises

UndetectableNameError – If the URL does not match any regexes

spack.url.parse_version(path)[source]

Try to extract a version string from a filename or URL.

Parameters

path (str) – The filename or URL for the package

Returns

The version of the package

Return type

spack.version.Version

Raises

UndetectableVersionError – If the URL does not match any regexes

spack.url.parse_version_offset(path)[source]

Try to extract a version string from a filename or URL.

Parameters

path (str) – The filename or URL for the package

Returns

A tuple containing:

version of the package, first index of version, length of version string, the index of the matching regex, the matching regex

Return type

tuple

Raises

UndetectableVersionError – If the URL does not match any regexes

spack.url.split_url_extension(path)[source]

Some URLs have a query string, e.g.:

  1. https://github.com/losalamos/CLAMR/blob/packages/PowerParser_v2.0.7.tgz?raw=true

  2. http://www.apache.org/dyn/closer.cgi?path=/cassandra/1.2.0/apache-cassandra-1.2.0-rc2-bin.tar.gz

  3. https://gitlab.kitware.com/vtk/vtk/repository/archive.tar.bz2?ref=v7.0.0

In (1), the query string needs to be stripped to get at the extension; in (2), the filename is IN a single final query argument; in (3), a query parameter follows the file extension.

This strips the URL into three pieces: prefix, ext, and suffix. The suffix contains anything that was stripped off the URL to get at the file extension. In (1), it will be '?raw=true', but in (2), it will be empty. In (3) the suffix is a parameter that follows after the file extension, e.g.:

  1. ('https://github.com/losalamos/CLAMR/blob/packages/PowerParser_v2.0.7', '.tgz', '?raw=true')

  2. ('http://www.apache.org/dyn/closer.cgi?path=/cassandra/1.2.0/apache-cassandra-1.2.0-rc2-bin', '.tar.gz', None)

  3. ('https://gitlab.kitware.com/vtk/vtk/repository/archive', '.tar.bz2', '?ref=v7.0.0')
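A simplified sketch covering only case (1), stripping a trailing query string to expose a known archive extension. The helper name split_simple_url and the small extension list are assumptions; Spack’s real implementation also handles cases (2) and (3):

```python
def split_simple_url(url):
    # Split a URL into (prefix, extension, suffix), where suffix is
    # anything stripped off to reach the extension (e.g. '?raw=true').
    extensions = (".tar.gz", ".tar.bz2", ".tgz", ".zip")
    prefix, _, query = url.partition("?")
    suffix = "?" + query if query else ""
    for ext in extensions:
        if prefix.endswith(ext):
            return prefix[: -len(ext)], ext, suffix
    return prefix, "", suffix
```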

spack.url.strip_name_suffixes(path, version)[source]

Most tarballs contain a package name followed by a version number. However, some also contain extraneous information in-between the name and version:

  • rgb-1.0.6

  • converge_install_2.3.16

  • jpegsrc.v9b

These strings are not part of the package name and should be ignored. This function strips the version number and any extraneous suffixes off and returns the remaining string. The goal is that the name is always the last thing in path:

  • rgb

  • converge

  • jpeg

Parameters
  • path (str) – The filename or URL for the package

  • version (str) – The version detected for this URL

Returns

The path with any extraneous suffixes removed

Return type

str

spack.url.strip_query_and_fragment(path)[source]
spack.url.strip_version_suffixes(path)[source]

Some tarballs contain extraneous information after the version:

  • bowtie2-2.2.5-source

  • libevent-2.0.21-stable

  • cuda_8.0.44_linux.run

These strings are not part of the version number and should be ignored. This function strips those suffixes off and returns the remaining string. The goal is that the version is always the last thing in path:

  • bowtie2-2.2.5

  • libevent-2.0.21

  • cuda_8.0.44

Parameters

path (str) – The filename or URL for the package

Returns

The path with any extraneous suffixes removed

Return type

str
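The stripping described above can be sketched with a regex pass per suffix. The suffix list here is a tiny assumed subset of what Spack actually recognizes, chosen to cover the three examples:

```python
import re

def strip_version_suffixes(path):
    # Remove a few common suffixes that trail a version number.
    suffixes = [r"[-_]source$", r"[-_]stable$", r"[-_]linux\.run$"]
    for suffix in suffixes:
        path = re.sub(suffix, "", path)
    return path
```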

spack.url.substitute_version(path, new_version)[source]

Given a URL or archive name, find the version in the path and substitute the new version for it. Replace all occurrences of the version if they don’t overlap with the package name.

Simple example:

>>> substitute_version('http://www.mr511.de/software/libelf-0.8.13.tar.gz', '2.9.3')
'http://www.mr511.de/software/libelf-2.9.3.tar.gz'

Complex example:

>>> substitute_version('https://www.hdfgroup.org/ftp/HDF/releases/HDF4.2.12/src/hdf-4.2.12.tar.gz', '2.3')
'https://www.hdfgroup.org/ftp/HDF/releases/HDF2.3/src/hdf-2.3.tar.gz'

spack.url.substitution_offsets(path)[source]

This returns offsets for substituting versions and names in the provided path. It is a helper for substitute_version().

spack.url.wildcard_version(path)[source]

Find the version in the supplied path, and return a regular expression that will match this path with any version in its place.
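A hedged sketch of the idea: escape the path and swap the version for a version-matching group. Spack’s real function detects the version in the path itself; passing it explicitly is an assumption of this illustration.

```python
import re

def wildcard_version(path, version):
    # Escape regex metacharacters in the path, then replace the
    # (escaped) version with a group that matches any version string.
    pattern = re.escape(path)
    return pattern.replace(re.escape(version), r"(\d[\d._-]*)")
```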

spack.user_environment module

spack.user_environment.environment_modifications_for_spec(spec, view=None, set_package_py_globals=True)[source]

List of environment (shell) modifications to be processed for spec.

This list is specific to the location of the spec or its projection in the view.

Parameters
  • spec (spack.spec.Spec) – spec for which to list the environment modifications

  • view – view associated with the spec passed as first argument

  • set_package_py_globals (bool) – whether or not to set the global variables in the package.py files (this may be problematic when using buildcaches that have been built on a different but compatible OS)

spack.user_environment.prefix_inspections(platform)[source]

Get list of prefix inspections for platform

Parameters

platform (str) – the name of the platform to consider. The platform determines what environment variables Spack will use for some inspections.

Returns

A dictionary mapping subdirectory names to lists of environment variables to modify with that directory if it exists.

spack.user_environment.spack_loaded_hashes_var = 'SPACK_LOADED_HASHES'

Environment variable name Spack uses to track individually loaded packages

spack.user_environment.unconditional_environment_modifications(view)[source]

List of environment (shell) modifications to be processed for view.

This list does not depend on the specs in this environment

spack.variant module

The variant module contains data structures that are needed to manage variants both in packages and in specs.

class spack.variant.AbstractVariant(name, value, propagate=False)[source]

Bases: object

A variant that has not yet decided who it wants to be. It behaves like a multi valued variant which could do things.

This kind of variant is generated during parsing of expressions like foo=bar and differs from multi valued variants because it will satisfy any other variant with the same name. This is because it could do it if it grows up to be a multi valued variant with the right set of values.

compatible(other)[source]

Returns True if self and other are compatible, False otherwise.

As there is no semantic check, two VariantSpec are compatible if either they contain the same value or they are both multi-valued.

Parameters

other – instance against which we test compatibility

Returns

True or False

Return type

bool

constrain(other)[source]

Modify self to match all the constraints for other if both instances are multi-valued. Returns True if self changed, False otherwise.

Parameters

other – instance against which we constrain self

Returns

True or False

Return type

bool

copy()[source]

Returns an instance of a variant equivalent to self

Returns

a copy of self

Return type

AbstractVariant

>>> a = MultiValuedVariant('foo', True)
>>> b = a.copy()
>>> assert a == b
>>> assert a is not b
static from_node_dict(name, value)[source]

Reconstruct a variant from a node dict.

satisfies(other)[source]

Returns true if other.name == self.name, because any value that other holds and is not in self yet could be added.

Parameters

other – constraint to be met for the method to return True

Returns

True or False

Return type

bool

property value

Returns a tuple of strings containing the values stored in the variant.

Returns

values stored in the variant

Return type

tuple

yaml_entry()[source]

Returns a key, value tuple suitable to be an entry in a yaml dict.

Returns

(name, value_representation)

Return type

tuple

class spack.variant.BoolValuedVariant(name, value, propagate=False)[source]

Bases: SingleValuedVariant

A variant that can hold either True or False.

BoolValuedVariant can also hold the value ‘*’, for coerced comparisons between foo=* and +foo or ~foo.

class spack.variant.DisjointSetsOfValues(*sets)[source]

Bases: Sequence

Allows combinations from one of many mutually exclusive sets.

The value ('none',) is reserved to denote the empty set and therefore no other set can contain the item 'none'.

Parameters

*sets (list) – mutually exclusive sets of values

allow_empty_set()[source]

Adds the empty set to the current list of disjoint sets.

feature_values

Attribute used to track values which correspond to features which can be enabled or disabled as understood by the package’s build system.

prohibit_empty_set()[source]

Removes the empty set from the current list of disjoint sets.

property validator
with_default(default)[source]

Sets the default value and returns self.

with_error(error_fmt)[source]

Sets the error message format and returns self.

with_non_feature_values(*values)[source]

Marks a few values as not being tied to a feature.

exception spack.variant.DuplicateVariantError(message, long_message=None)[source]

Bases: SpecError

Raised when the same variant occurs in a spec twice.

exception spack.variant.InconsistentValidationError(vspec, variant)[source]

Bases: SpecError

Raised if the wrong validator is used to validate a variant.

exception spack.variant.InvalidVariantForSpecError(variant, when, spec)[source]

Bases: SpecError

Raised when an invalid conditional variant is specified.

exception spack.variant.InvalidVariantValueCombinationError(message, long_message=None)[source]

Bases: SpecError

Raised when a variant has values ‘*’ or ‘none’ with other values.

exception spack.variant.InvalidVariantValueError(variant, invalid_values, pkg)[source]

Bases: SpecError

Raised when an otherwise valid variant has at least one invalid value.

class spack.variant.MultiValuedVariant(name, value, propagate=False)[source]

Bases: AbstractVariant

A variant that can hold multiple values at once.

append(value)[source]

Add another value to this multi-valued variant.

satisfies(other)[source]

Returns true if other.name == self.name and other.value is a strict subset of self. Does not try to validate.

Parameters

other – constraint to be met for the method to return True

Returns

True or False

Return type

bool

exception spack.variant.MultipleValuesInExclusiveVariantError(variant, pkg)[source]

Bases: SpecError, ValueError

Raised when multiple values are present in a variant that wants only one.

class spack.variant.SingleValuedVariant(name, value, propagate=False)[source]

Bases: AbstractVariant

A variant that accepts any of multiple values, but holds only one at a time.

compatible(other)[source]

Returns True if self and other are compatible, False otherwise.

As there is no semantic check, two VariantSpec are compatible if either they contain the same value or they are both multi-valued.

Parameters

other – instance against which we test compatibility

Returns

True or False

Return type

bool

constrain(other)[source]

Modify self to match all the constraints for other if both instances are multi-valued. Returns True if self changed, False otherwise.

Parameters

other – instance against which we constrain self

Returns

True or False

Return type

bool

satisfies(other)[source]

Returns true if other.name == self.name, because any value that other holds and is not in self yet could be added.

Parameters

other – constraint to be met for the method to return True

Returns

True or False

Return type

bool

yaml_entry()[source]

Returns a key, value tuple suitable to be an entry in a yaml dict.

Returns

(name, value_representation)

Return type

tuple

exception spack.variant.UnknownVariantError(spec, variants)[source]

Bases: SpecError

Raised when an unknown variant occurs in a spec.

exception spack.variant.UnsatisfiableVariantSpecError(provided, required)[source]

Bases: UnsatisfiableSpecError

Raised when a spec variant conflicts with package constraints.

class spack.variant.Value(value, when)[source]

Bases: object

Conditional value that might be used in variants.

class spack.variant.Variant(name, default, description, values=(True, False), multi=False, validator=None, sticky=False)[source]

Bases: object

Represents a variant in a package, as declared in the variant directive.

property allowed_values

Returns a string representation of the allowed values for printing purposes

Returns

representation of the allowed values

Return type

str

make_default()[source]

Factory that creates a variant holding the default value.

Returns

instance of the proper variant

Return type

MultiValuedVariant or SingleValuedVariant or BoolValuedVariant

make_variant(value)[source]

Factory that creates a variant holding the value passed as a parameter.

Parameters

value – value that will be hold by the variant

Returns

instance of the proper variant

Return type

MultiValuedVariant or SingleValuedVariant or BoolValuedVariant

validate_or_raise(vspec, pkg_cls=None)[source]

Validate a variant spec against this package variant. Raises an exception if any error is found.

Parameters
  • vspec – variant spec to be validated

  • pkg_cls – package class, if available, against which to validate

Raises

An appropriate SpecError subclass if validation fails
property variant_cls

Proper variant class to be used for this configuration.

class spack.variant.VariantMap(spec)[source]

Bases: HashableMap

Map containing variant instances. New values can be added only if the key is not already present.

property concrete

Returns True if the spec is concrete in terms of variants.

Returns

True or False

Return type

bool

constrain(other)[source]

Add all variants in other that aren’t in self to self. Also constrain all multi-valued variants that are already present. Return True if self changed, False otherwise

Parameters

other (VariantMap) – instance against which we constrain self

Returns

True or False

Return type

bool
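The constrain() semantics described above can be sketched with plain dicts of sets (an analogy only; the helper name constrain_variant_maps is hypothetical and real VariantMaps hold variant objects, not sets):

```python
def constrain_variant_maps(self_map, other_map):
    # Copy over variants missing from self; union the values of
    # variants already present. Return True if self changed.
    changed = False
    for name, values in other_map.items():
        if name not in self_map:
            self_map[name] = set(values)
            changed = True
        elif not set(values) <= self_map[name]:
            self_map[name] |= set(values)
            changed = True
    return changed
```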

copy()[source]

Return an instance of VariantMap equivalent to self.

Returns

a copy of self

Return type

VariantMap

property dict
satisfies(other, strict=False)[source]

Returns True if this VariantMap is more constrained than other, False otherwise.

Parameters
  • other (VariantMap) – VariantMap instance to satisfy

  • strict (bool) – if True return False if a key is in other and not in self, otherwise discard that key and proceed with evaluation

Returns

True or False

Return type

bool

substitute(vspec)[source]

Substitutes the entry under vspec.name with vspec.

Parameters

vspec – variant spec to be substituted

spack.variant.any_combination_of(*values)[source]

Multi-valued variant that allows any combination of the specified values, and also allows the user to specify ‘none’ (as a string) to choose none of them.

It is up to the package implementation to handle the value ‘none’ specially, if at all.

Parameters

*values – allowed variant values

Returns

a properly initialized instance of DisjointSetsOfValues
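The validation rule described above can be sketched as a predicate: a value tuple is acceptable if it is exactly ('none',) or a non-empty subset of the allowed values. The helper name any_combination_validator is hypothetical; this mirrors the documented semantics, not Spack’s actual code.

```python
def any_combination_validator(*allowed):
    # Build a validator for the documented "any combination or 'none'"
    # rule over the given allowed values.
    allowed = set(allowed)

    def validator(values):
        values = tuple(values)
        if values == ("none",):
            return True
        return bool(values) and set(values) <= allowed

    return validator
```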

spack.variant.auto_or_any_combination_of(*values)[source]

Multi-valued variant that allows any combination of a set of values (but not the empty set) or ‘auto’.

Parameters

*values – allowed variant values

Returns

a properly initialized instance of DisjointSetsOfValues

spack.variant.conditional(*values, **kwargs)[source]

Conditional values that can be used in variant declarations.

spack.variant.disjoint_sets(*sets)[source]

Multi-valued variant that allows any combination picking from one of multiple disjoint sets of values, and also allows the user to specify ‘none’ (as a string) to choose none of them.

It is up to the package implementation to handle the value ‘none’ specially, if at all.

Parameters

*sets – mutually exclusive sets of values

Returns

a properly initialized instance of DisjointSetsOfValues

spack.variant.implicit_variant_conversion(method)[source]

Converts other to type(self) and calls method(self, other)

Parameters

method – any predicate method that takes another variant as an argument

Returns: decorated method

spack.variant.substitute_abstract_variants(spec)[source]

Uses the information in spec.package to turn any variant that needs it into a SingleValuedVariant.

This method is best effort. All variants that can be substituted will be substituted before any error is raised.

Parameters

spec – spec on which to operate the substitution

spack.verify module

class spack.verify.VerificationResults[source]

Bases: object

add_error(path, field)[source]
has_errors()[source]
json_string()[source]
spack.verify.check_entry(path, data)[source]
spack.verify.check_file_manifest(filename)[source]
spack.verify.check_spec_manifest(spec)[source]
spack.verify.compute_hash(path)[source]
spack.verify.create_manifest_entry(path)[source]
spack.verify.write_manifest(spec)[source]

spack.version module

This module implements Version and version-ish objects. These are:

Version

A single version of a package.

VersionRange

A range of versions of a package.

VersionList

A list of Versions and VersionRanges.

All of these types support the following operations, which can be called on any of the types:

__eq__, __ne__, __lt__, __gt__, __ge__, __le__, __hash__
__contains__
satisfies
overlaps
union
intersection
concrete
spack.version.Version(string)[source]
class spack.version.VersionList(vlist=None)[source]

Bases: object

Sorted, non-redundant list of Versions and VersionRanges.

add(version)[source]
property concrete
copy()[source]
static from_dict(dictionary)[source]

Parse dict from to_dict.

highest()[source]

Get the highest version in the list.

highest_numeric()[source]

Get the highest numeric version in the list.

intersect(other)[source]

Intersect this spec’s list with other.

Return True if the spec changed as a result; False otherwise

intersection(other)[source]
lowest()[source]

Get the lowest version in the list.

overlaps(other)[source]
preferred()[source]

Get the preferred (latest) version in the list.

satisfies(other, strict=False)[source]

A VersionList satisfies another if some version in the list would satisfy some version in the other list. This uses essentially the same algorithm as overlaps() does for VersionList, but it calls satisfies() on member Versions and VersionRanges.

If strict is specified, this version list must lie entirely within the other in order to satisfy it.

to_dict()[source]

Generate human-readable dict for YAML.

union(other)[source]
update(other)[source]
class spack.version.VersionRange(start, end)[source]

Bases: object

property concrete
highest()[source]
intersection(other)[source]
lowest()[source]
overlaps(other)[source]
satisfies(other)[source]

x.satisfies(y) in general means that x and y have a non-zero intersection. For VersionRange this means they overlap.

satisfies is a commutative binary operator, meaning that x.satisfies(y) if and only if y.satisfies(x).

Note: in some cases we have the keyword argument x.satisfies(y, strict=True), meaning strict set inclusion, which is not commutative. However, VersionRange lacks this keyword for unknown reasons.

Examples
  • 1:3 satisfies 2:4, as their intersection is 2:3.

  • 1:2 does not satisfy 3:4, as their intersection is empty.

  • 4.5:4.7 satisfies 4.7.2:4.8, as their intersection is 4.7.2:4.7.
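The commutative overlap semantics above can be sketched with plain numeric intervals (an analogy only: real Spack versions compare component-wise, not as numbers, and ranges_overlap is a hypothetical helper):

```python
def ranges_overlap(a, b):
    # Two closed intervals (lo, hi) satisfy each other iff they
    # intersect; the check is symmetric in a and b.
    a_lo, a_hi = a
    b_lo, b_hi = b
    return a_lo <= b_hi and b_lo <= a_hi
```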

union(other)[source]
spack.version.ver(obj)[source]

Parses a Version, VersionRange, or VersionList from a string or list of strings.

Module contents

spack.spack_version_info = (0, 19, 1)

(major, minor, micro[, dev release]) tuple