spack.util package

Subpackages

Submodules

spack.util.archive module

class spack.util.archive.ChecksumWriter(fileobj, algorithm=<built-in function openssl_sha256>)[source]

Bases: BufferedIOBase

Checksum writer computes a checksum while writing to a file.

close()[source]

Flush and close the IO object.

This method has no effect if the file is already closed.

property closed
fileno()[source]

Returns underlying file descriptor if one exists.

OSError is raised if the IO object does not use a file descriptor.

flush()[source]

Flush write buffers, if applicable.

This is not implemented for read-only and non-blocking streams.

hexdigest()[source]
myfileobj = None
peek(n)[source]
read(size=-1)[source]

Read and return up to n bytes.

If the argument is omitted, None, or negative, reads and returns all data until EOF.

If the argument is positive, and the underlying raw stream is not ‘interactive’, multiple raw reads may be issued to satisfy the byte count (unless EOF is reached first). But for interactive raw streams (as well as sockets and pipes), at most one raw read will be issued, and a short result does not imply that EOF is imminent.

Returns an empty bytes object on EOF.

Returns None if the underlying raw stream was open in non-blocking mode and no data is available at the moment.

read1(size=-1)[source]

Read and return up to n bytes, with at most one read() call to the underlying raw stream. A short result does not imply that EOF is imminent.

Returns an empty bytes object on EOF.

readable()[source]

Return whether object was opened for reading.

If False, read() will raise OSError.

readline(size=-1)[source]

Read and return a line from the stream.

If size is specified, at most size bytes will be read.

The line terminator is always b'\n' for binary files; for text files, the newlines argument to open can be used to select the line terminator(s) recognized.

rewind()[source]
seek(offset, whence=0)[source]

Change the stream position to the given byte offset.

offset

The stream position, relative to ‘whence’.

whence

The relative position to seek from.

The offset is interpreted relative to the position indicated by whence. Values for whence are:

  • os.SEEK_SET or 0 – start of stream (the default); offset should be zero or positive

  • os.SEEK_CUR or 1 – current stream position; offset may be negative

  • os.SEEK_END or 2 – end of stream; offset is usually negative

Return the new absolute position.

seekable()[source]

Return whether object supports random access.

If False, seek(), tell() and truncate() will raise OSError. This method may need to do a test seek().

tell()[source]

Return current stream position.

writable()[source]

Return whether object was opened for writing.

If False, write() will raise OSError.

write(data)[source]

Write the given buffer to the IO stream.

Returns the number of bytes written, which is always the length of b in bytes.

Raises BlockingIOError if the buffer is full and the underlying raw stream cannot accept more data at the moment.

spack.util.archive.default_path_to_name(path: str) str[source]

Converts a path to a tarfile name, which uses posix path separators.

spack.util.archive.gzip_compressed_tarfile(path)[source]

Create a reproducible, gzip-compressed tarfile, and keep track of shasums of both the compressed and uncompressed tarfile. Reproducibility is achieved by normalizing the gzip header (no file name and zero mtime).

Yields a tuple of the following:

  • tarfile.TarFile: tarfile object

  • ChecksumWriter: checksum of the gzip compressed tarfile

  • ChecksumWriter: checksum of the uncompressed tarfile
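The header normalization can be sketched with the standard library alone: passing a file object (so no FNAME field is written) and mtime=0 makes the gzip output byte-for-byte reproducible. The helper name is illustrative, not Spack's internals.

```python
import gzip
import io

def reproducible_gzip_bytes(data: bytes) -> bytes:
    """Gzip-compress ``data`` with a normalized header (no filename, mtime=0)."""
    buf = io.BytesIO()
    # mtime=0 zeroes the timestamp field; because a fileobj (not a path) is
    # passed, no FNAME field is written, so the output is deterministic.
    with gzip.GzipFile(fileobj=buf, mode="wb", mtime=0) as gz:
        gz.write(data)
    return buf.getvalue()

# Two independent compressions of the same payload are byte-identical.
assert reproducible_gzip_bytes(b"hello") == reproducible_gzip_bytes(b"hello")
```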

spack.util.archive.reproducible_tarfile_from_prefix(tar: ~tarfile.TarFile, prefix: str, *, include_parent_directories: bool = False, skip: ~typing.Callable[[~posix.DirEntry], bool] = <function <lambda>>, path_to_name: ~typing.Callable[[str], str] = <function default_path_to_name>) None[source]

Create a tarball from a given directory. Only adds regular files, symlinks and dirs. Skips devices, fifos. Preserves hardlinks. Normalizes permissions like git. Tar entries are added in depth-first pre-order, with dir entries partitioned by file | dir, and sorted lexicographically, for reproducibility. Partitioning ensures only one dir is in memory at a time, and sorting improves compression.

Parameters:
  • tar – tarfile object opened in write mode

  • prefix – path to directory to tar (either absolute or relative)

  • include_parent_directories – whether to include every directory leading up to prefix in the tarball

  • skip – function that receives a DirEntry and returns True if the entry should be skipped, whether it is a file or directory. Default implementation does not skip anything.

  • path_to_name – function that converts a path string to a tarfile entry name, which should be in posix format. Not only is it necessary to transform paths in certain cases, such as windows paths to posix format, but it can also be used to prepend a directory to each entry even if it does not exist on the filesystem. The default implementation drops the leading slash on posix and the drive letter on windows for absolute paths, and formats the result as a posix path.
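The traversal order described above (depth-first pre-order, files before dirs, each group sorted lexicographically) can be sketched as follows; walk_entries is a hypothetical helper, not part of Spack.

```python
import os
import tempfile

def walk_entries(prefix: str):
    """Depth-first pre-order walk: each directory's entries are partitioned
    into files-then-dirs and sorted by name, so only one directory listing
    is held in memory at a time (a sketch of the documented entry order)."""
    with os.scandir(prefix) as it:
        entries = list(it)
    files = sorted((e for e in entries if not e.is_dir(follow_symlinks=False)),
                   key=lambda e: e.name)
    dirs = sorted((e for e in entries if e.is_dir(follow_symlinks=False)),
                  key=lambda e: e.name)
    for f in files:          # files of this directory first...
        yield f.path
    for d in dirs:           # ...then each subdirectory, recursed immediately
        yield d.path
        yield from walk_entries(d.path)

with tempfile.TemporaryDirectory() as root:
    os.mkdir(os.path.join(root, "a"))
    open(os.path.join(root, "z.txt"), "w").close()
    open(os.path.join(root, "a", "b.txt"), "w").close()
    order = [os.path.relpath(p, root) for p in walk_entries(root)]
    # The file z.txt precedes the directory a, despite sorting after it.
    assert order == ["z.txt", "a", os.path.join("a", "b.txt")]
```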

spack.util.classes module

spack.util.classes.list_classes(parent_module, mod_path)[source]

Given a parent path (e.g., spack.platforms or spack.analyzers), use list_modules to derive the module names, and then mod_to_class to derive class names. Import the classes and return them in a list

spack.util.compression module

class spack.util.compression.BZipFileType[source]

Bases: CompressedFileTypeInterface

extension: str = 'bz2'
name: str = 'bzip2 compressed data'
peek(stream: BinaryIO, num_bytes: int) BytesIO | None[source]

This method returns the first num_bytes of a decompressed stream. Returns None if no builtin support for decompression.

class spack.util.compression.CompressedFileTypeInterface[source]

Bases: FileTypeInterface

Interface class for FileTypes that include compression information

peek(stream: BinaryIO, num_bytes: int) BytesIO | None[source]

This method returns the first num_bytes of a decompressed stream. Returns None if no builtin support for decompression.

class spack.util.compression.FileTypeInterface[source]

Bases: object

Base interface class for describing and querying file type information. FileType describes information about a single file type such as typical extension and byte header properties, and provides an interface to check a given file against said type based on magic number.

This class should be subclassed each time a new type is to be described.

Subclasses should each describe a different type of file. In order to do so, they must define the extension string, magic number, and header offset (if non-zero). If a class has multiple magic numbers, it will need to override the method describing that file type's magic numbers and the method that checks a type's magic numbers against a given file's.

OFFSET = 0
extension: str
classmethod header_size() int[source]

Return size of largest magic number associated with file type

classmethod magic_numbers() List[bytes][source]

Return a list of all potential magic numbers for a filetype

matches_magic(stream: BinaryIO) bool[source]

Returns true if the stream matches the current file type by any of its magic numbers. Resets stream to original position.

Parameters:

stream – file byte stream

name: str
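The magic-number check described by matches_magic can be sketched with a plain stream: peek at a known offset, compare, and restore the position. The two-byte gzip magic is standard; the helper itself is illustrative.

```python
import io

GZIP_MAGIC = b"\x1f\x8b"  # gzip's well-known two-byte magic number

def matches_magic(stream, magic: bytes, offset: int = 0) -> bool:
    """Return True if ``stream`` carries ``magic`` at ``offset``, restoring
    the stream position afterwards, as the interface requires."""
    pos = stream.tell()
    try:
        stream.seek(offset)
        return stream.read(len(magic)) == magic
    finally:
        stream.seek(pos)  # reset stream to its original position

stream = io.BytesIO(b"\x1f\x8b\x08" + b"\x00" * 10)
assert matches_magic(stream, GZIP_MAGIC)
assert stream.tell() == 0  # position untouched by the check
```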
class spack.util.compression.GZipFileType[source]

Bases: CompressedFileTypeInterface

extension: str = 'gz'
name: str = 'gzip compressed data'
peek(stream: BinaryIO, num_bytes: int) BytesIO | None[source]

This method returns the first num_bytes of a decompressed stream. Returns None if no builtin support for decompression.

class spack.util.compression.LzmaFileType[source]

Bases: CompressedFileTypeInterface

extension: str = 'xz'
name: str = 'xz compressed data'
peek(stream: BinaryIO, num_bytes: int) BytesIO | None[source]

This method returns the first num_bytes of a decompressed stream. Returns None if no builtin support for decompression.

spack.util.compression.MAX_BYTES_ARCHIVE_HEADER = 265

Maximum number of bytes to read from a file to determine any archive type. Tar is the largest.
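The 265-byte figure follows from the tar format: the ustar magic sits at byte offset 257, and 257 plus 8 bytes of magic/version gives 265. A quick check, assuming only the standard tarfile module:

```python
import io
import tarfile

# The ustar magic lives at byte offset 257 of a tar header; reading
# 257 + 8 = 265 bytes therefore suffices to identify every supported
# archive type, since tar's magic sits deepest in the file.
TAR_MAGIC_OFFSET = 257

buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w", format=tarfile.USTAR_FORMAT) as tar:
    info = tarfile.TarInfo(name="hello.txt")
    info.size = 2
    tar.addfile(info, io.BytesIO(b"hi"))

header = buf.getvalue()[:265]
assert header[TAR_MAGIC_OFFSET:TAR_MAGIC_OFFSET + 5] == b"ustar"
```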

spack.util.compression.SUPPORTED_FILETYPES: List[FileTypeInterface] = [<spack.util.compression.BZipFileType object>, <spack.util.compression.ZCompressedFileType object>, <spack.util.compression.GZipFileType object>, <spack.util.compression.LzmaFileType object>, <spack.util.compression.TarFileType object>, <spack.util.compression.ZipFleType object>]

Collection of supported archive and compression file type identifier classes.

class spack.util.compression.TarFileType[source]

Bases: FileTypeInterface

OFFSET = 257
extension: str = 'tar'
name: str = 'tar archive'
class spack.util.compression.ZCompressedFileType[source]

Bases: CompressedFileTypeInterface

extension: str = 'Z'
name: str = "compress'd data"
class spack.util.compression.ZipFleType[source]

Bases: FileTypeInterface

extension: str = 'zip'
name: str = 'Zip archive data'
spack.util.compression.decompressor_for(path: str, extension: str | None = None)[source]

Returns the appropriate decompression/extraction algorithm function pointer for the provided extension. If extension is None, it is computed from the path, and the decompression function is derived from that information.

spack.util.compression.decompressor_for_nix(extension: str) Callable[[str], Any][source]

Returns a function pointer to the appropriate decompression algorithm based on extension type and Unix-specific considerations, i.e., a reasonable expectation that system utilities like gzip, bzip2, and xz are available.

Parameters:

extension – extension of the archive file requiring decompression

spack.util.compression.decompressor_for_win(extension: str) Callable[[str], Any][source]

Returns a function pointer to appropriate decompression algorithm based on extension type and Windows specific considerations

Windows natively vendors only tar and no other archive/compression utilities, so we must rely exclusively on Python module support for all compression operations: tar for tarballs and zip files, and 7zip for Z-compressed archives and files, as Python does not provide support for the UNIX compress algorithm.

spack.util.compression.extension_from_magic_numbers(path: str, decompress: bool = False) str | None[source]

Return typical extension without leading . of a compressed file or archive at the given path, based on its magic numbers, similar to the file utility. Notice that the extension returned from this function may not coincide with the file’s given extension.

Parameters:
  • path – file to determine extension of

  • decompress – If True, method will peek into decompressed file to check for archive file types. If False, the method will return only the top-level extension (for example gz and not tar.gz).

Returns:

Spack recognized archive file extension as determined by file’s magic number and file name. If file is not on system or is of a type not recognized by Spack as an archive or compression type, None is returned. If the file is classified as a compressed tarball, the extension is abbreviated (for instance tgz not tar.gz) if that matches the file’s given extension.

spack.util.compression.extension_from_magic_numbers_by_stream(stream: BinaryIO, decompress: bool = False) str | None[source]

Returns the typical extension for the opened file, without leading ., based on its magic numbers.

If the stream does not represent a file type recognized by Spack (see SUPPORTED_FILETYPES), the method will return None.

Parameters:
  • stream – stream representing a file on system

  • decompress – if True, compressed files are checked for archive types beneath compression. For example tar.gz if True versus only gz if False.

spack.util.cpus module

spack.util.cpus.cpus_available()[source]

Returns the number of CPUs available for the current process, or the number of physical CPUs when that information cannot be retrieved. The number of available CPUs might differ from the number of physical CPUs when using Spack through Slurm or container runtimes.
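A minimal sketch of this fallback logic, assuming Linux affinity support where available (os.sched_getaffinity reflects cpusets imposed by Slurm or container runtimes, while cpu_count reports physical CPUs):

```python
import multiprocessing
import os

def cpus_available() -> int:
    """Prefer the scheduler affinity mask (respects CPU pinning on Linux);
    fall back to the physical CPU count where affinity is unavailable."""
    try:
        return len(os.sched_getaffinity(0))  # Linux only
    except AttributeError:
        return multiprocessing.cpu_count()

assert cpus_available() >= 1
```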

spack.util.cpus.determine_number_of_jobs(*, parallel: bool = False, max_cpus: int = 2, config: Configuration | None = None) int[source]

Packages that require sequential builds need 1 job. Otherwise we use the number of jobs set on the command line. If not set, then we use the config defaults (which is usually set through the builtin config scope), but we cap to the number of CPUs available to avoid oversubscription.

Parameters:
  • parallel – true when package supports parallel builds

  • max_cpus – maximum number of CPUs to use (defaults to cpus_available())

  • config – configuration object (defaults to global config)
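The capping behavior can be sketched as a pure function; determine_number_of_jobs below is a hypothetical re-implementation that takes an explicit configured_jobs value rather than a Configuration object:

```python
def determine_number_of_jobs(*, parallel: bool, configured_jobs: int,
                             max_cpus: int) -> int:
    """Sketch of the capping logic described above: sequential builds get
    one job; otherwise the configured job count is capped at the number of
    CPUs actually available, to avoid oversubscription."""
    if not parallel:
        return 1
    return min(configured_jobs, max_cpus)

assert determine_number_of_jobs(parallel=False, configured_jobs=16, max_cpus=8) == 1
assert determine_number_of_jobs(parallel=True, configured_jobs=16, max_cpus=8) == 8
assert determine_number_of_jobs(parallel=True, configured_jobs=4, max_cpus=8) == 4
```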

spack.util.crypto module

class spack.util.crypto.Checker(hexdigest: str, **kwargs)[source]

Bases: object

A checker checks files against one particular hex digest. It will automatically determine what hashing algorithm to use based on the length of the digest it's initialized with, e.g., if the digest is 32 hex characters long this will use md5.

Example: you know your tarball should hash to 'abc123' and want to check files against this digest. You would use this class like so:

hexdigest = 'abc123'
checker = Checker(hexdigest)
success = checker.check('downloaded.tar.gz')

After the call to check, the actual checksum is available in checker.sum, in case it’s needed for error output.

You can trade read performance against memory usage by adjusting the optional block_size argument. By default it's a 1 MB (2**20 bytes) buffer.
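The digest-length inference can be sketched from the hashes table defined later in spack.util.crypto (algorithm name mapped to digest size in bytes); the helper below is illustrative:

```python
import hashlib

# Mirrors spack.util.crypto.hashes: algorithm name -> digest size in bytes
HASHES = {"md5": 16, "sha1": 20, "sha224": 28, "sha256": 32,
          "sha384": 48, "sha512": 64}

def hash_algo_for_digest(hexdigest: str) -> str:
    """Infer the algorithm from the digest length (2 hex chars per byte)."""
    size = len(hexdigest) // 2
    for algo, digest_size in HASHES.items():
        if digest_size == size:
            return algo
    raise ValueError(f"no known algorithm with digest size {size}")

digest = hashlib.sha256(b"spack").hexdigest()
assert hash_algo_for_digest(digest) == "sha256"
assert hash_algo_for_digest("d" * 32) == "md5"  # 32 hex chars -> 16 bytes
```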

check(filename: str) bool[source]

Read the file with the specified name and check its checksum against self.hexdigest. Return True if they match, False otherwise. Actual checksum is stored in self.sum.

property hash_name: str

Get the name of the hash function this Checker is using.

class spack.util.crypto.DeprecatedHash(hash_alg, alert_fn, disable_security_check)[source]

Bases: object

spack.util.crypto.bit_length(num)[source]

Number of bits required to represent an integer in binary.

spack.util.crypto.checksum(hashlib_algo: Callable[[], hashlib._Hash], filename: str, *, block_size: int = 1048576) str[source]

Returns a hex digest of the filename generated using an algorithm from hashlib.

spack.util.crypto.checksum_stream(hashlib_algo: Callable[[], hashlib._Hash], fp: BinaryIO, *, block_size: int = 1048576) str[source]

Returns a hex digest of the stream generated using given algorithm from hashlib.
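The block-wise hashing pattern can be sketched as follows; this mirrors the described behavior but is not Spack's implementation:

```python
import hashlib
import io

def checksum_stream(hashlib_algo, fp, *, block_size: int = 2**20) -> str:
    """Hash a stream in fixed-size blocks so large files never need to be
    held fully in memory."""
    hasher = hashlib_algo()
    while True:
        block = fp.read(block_size)
        if not block:
            break
        hasher.update(block)
    return hasher.hexdigest()

data = b"x" * 3_000_000  # larger than one block, so several reads occur
assert checksum_stream(hashlib.sha256, io.BytesIO(data)) == \
    hashlib.sha256(data).hexdigest()
```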

spack.util.crypto.hash_algo_for_digest(hexdigest: str) str[source]

Gets name of the hash algorithm for a hex digest.

spack.util.crypto.hash_fun_for_algo(algo: str) Callable[[], hashlib._Hash][source]

Get a function that can perform the specified hash algorithm.

spack.util.crypto.hash_fun_for_digest(hexdigest: str) Callable[[], hashlib._Hash][source]

Gets a hash function corresponding to a hex digest.

spack.util.crypto.hashes = {'md5': 16, 'sha1': 20, 'sha224': 28, 'sha256': 32, 'sha384': 48, 'sha512': 64}

Set of hash algorithms that Spack can use, mapped to digest size in bytes

spack.util.crypto.prefix_bits(byte_array, bits)[source]

Return the first <bits> bits of a byte array as an integer.
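A sketch of the same big-endian prefix extraction; prefix_bits here is a hypothetical re-implementation:

```python
def prefix_bits(byte_array: bytes, bits: int) -> int:
    """Interpret the bytes big-endian and keep only the leading ``bits`` bits
    by shifting the trailing bits away."""
    value = int.from_bytes(byte_array, "big")
    return value >> (len(byte_array) * 8 - bits)

assert prefix_bits(b"\xff\x00", 4) == 0b1111  # top nibble of 0xff00
assert prefix_bits(b"\x80", 1) == 1           # just the leading bit
```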

spack.util.debug module

Debug signal handler: prints a stack trace and enters interpreter.

register_interrupt_handler() enables a ctrl-C handler that prints a stack trace and drops the user into an interpreter.

class spack.util.debug.ForkablePdb(stdout_fd=None, stderr_fd=None)[source]

Bases: Pdb

This class allows the Python debugger to follow forked processes and set tracepoints, allowing Pdb to be used from a Python multiprocessing child process.

This is used the same way one would normally use Pdb: simply import this class and use it as a drop-in replacement for Pdb, although the syntax here is slightly different, requiring instantiation of this class, i.e. ForkablePdb().set_trace().

This should be used when attempting to call a debugger from a child process spawned by Python multiprocessing, such as during the run of Spack.install, or anywhere else Spack spawns a child process.

spack.util.debug.debug_handler(sig, frame)[source]

Interrupt running process, and provide a python prompt for interactive debugging.

spack.util.debug.register_interrupt_handler()[source]

Print traceback and enter an interpreter on Ctrl-C

spack.util.editor module

Module for finding the user’s preferred text editor.

Defines one function, editor(), which invokes the editor defined by the user’s VISUAL environment variable if set. We fall back to the editor defined by the EDITOR environment variable if VISUAL is not set or the specified editor fails (e.g. no DISPLAY for a graphical editor). If neither variable is set, we fall back to one of several common editors, raising an EnvironmentError if we are unable to find one.

spack.util.editor.editor(*args: str, exec_fn: ~typing.Callable[[str, ~typing.List[str]], int] = <built-in function execv>) bool[source]

Invoke the user’s editor.

This will try to execute the following, in order:

  1. $VISUAL <args> # the “visual” editor (per POSIX)

  2. $EDITOR <args> # the regular editor (per POSIX)

  3. some default editor (see _default_editors) with <args>

If an environment variable isn’t defined, it is skipped. If it points to something that can’t be executed, we’ll print a warning. And if we can’t find anything that can be executed after searching the full list above, we’ll raise an error.

Parameters:

args – args to pass to editor

Optional Arguments:

exec_fn – function used to invoke the editor; use spack.util.editor.executable if you want something that returns, instead of the default os.execv().
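The lookup order can be sketched with an injected which predicate so the default-editor step stays testable; DEFAULT_EDITORS and choose_editor are illustrative names, not Spack's actual identifiers:

```python
import shutil

DEFAULT_EDITORS = ["vim", "vi", "emacs", "nano"]  # illustrative; see _default_editors

def choose_editor(env: dict, which=shutil.which) -> str:
    """Model only the lookup order: $VISUAL, then $EDITOR, then common
    defaults found on PATH. The real editor() also retries on failure."""
    for var in ("VISUAL", "EDITOR"):
        if env.get(var):
            return env[var]
    for candidate in DEFAULT_EDITORS:
        if which(candidate):
            return candidate
    raise EnvironmentError("no text editor found")

assert choose_editor({"VISUAL": "code -w", "EDITOR": "vi"}) == "code -w"
assert choose_editor({"EDITOR": "vi"}) == "vi"
assert choose_editor({}, which=lambda c: c == "nano") == "nano"
```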

spack.util.editor.executable(exe: str, args: List[str]) int[source]

Wrapper that makes spack.util.executable.Executable look like os.execv().

Use this with editor() if you want it to return instead of running execv.

spack.util.elf module

class spack.util.elf.CStringType[source]

Bases: object

PT_INTERP = 1
RPATH = 2
class spack.util.elf.ELF_CONSTANTS[source]

Bases: object

CLASS32 = 1
CLASS64 = 2
DATA2LSB = 1
DATA2MSB = 2
DT_NEEDED = 1
DT_NULL = 0
DT_RPATH = 15
DT_RUNPATH = 29
DT_SONAME = 14
DT_STRTAB = 5
ET_DYN = 3
ET_EXEC = 2
MAGIC = b'\x7fELF'
PT_DYNAMIC = 2
PT_INTERP = 3
PT_LOAD = 1
SHT_STRTAB = 3
exception spack.util.elf.ElfCStringUpdatesFailed(rpath: UpdateCStringAction | None, pt_interp: UpdateCStringAction | None)[source]

Bases: Exception

class spack.util.elf.ElfFile[source]

Bases: object

Parsed ELF file.

byte_order: str
dt_needed_strs: List[bytes]
dt_needed_strtab_offsets: List[int]
dt_rpath_offset: int
dt_rpath_str: bytes
dt_soname_str: bytes
dt_soname_strtab_offset: int
elf_hdr: ElfHeader
has_needed: bool
has_pt_dynamic: bool
has_pt_interp: bool
has_rpath: bool
has_soname: bool
is_64_bit: bool
is_little_endian: bool
is_runpath: bool
pt_dynamic_p_filesz: int
pt_dynamic_p_offset: int
pt_dynamic_strtab_offset: int
pt_interp_p_filesz: int
pt_interp_p_offset: int
pt_interp_str: bytes
pt_load: List[Tuple[int, int]]
rpath_strtab_offset: int
class spack.util.elf.ElfHeader(e_type, e_machine, e_version, e_entry, e_phoff, e_shoff, e_flags, e_ehsize, e_phentsize, e_phnum, e_shentsize, e_shnum, e_shstrndx)[source]

Bases: NamedTuple

e_ehsize: int

Alias for field number 7

e_entry: int

Alias for field number 3

e_flags: int

Alias for field number 6

e_machine: int

Alias for field number 1

e_phentsize: int

Alias for field number 8

e_phnum: int

Alias for field number 9

e_phoff: int

Alias for field number 4

e_shentsize: int

Alias for field number 10

e_shnum: int

Alias for field number 11

e_shoff: int

Alias for field number 5

e_shstrndx: int

Alias for field number 12

e_type: int

Alias for field number 0

e_version: int

Alias for field number 2

exception spack.util.elf.ElfParsingError[source]

Bases: Exception

class spack.util.elf.ProgramHeader32(p_type, p_offset, p_vaddr, p_paddr, p_filesz, p_memsz, p_flags, p_align)[source]

Bases: NamedTuple

p_align: int

Alias for field number 7

p_filesz: int

Alias for field number 4

p_flags: int

Alias for field number 6

p_memsz: int

Alias for field number 5

p_offset: int

Alias for field number 1

p_paddr: int

Alias for field number 3

p_type: int

Alias for field number 0

p_vaddr: int

Alias for field number 2

class spack.util.elf.ProgramHeader64(p_type, p_flags, p_offset, p_vaddr, p_paddr, p_filesz, p_memsz, p_align)[source]

Bases: NamedTuple

p_align: int

Alias for field number 7

p_filesz: int

Alias for field number 5

p_flags: int

Alias for field number 1

p_memsz: int

Alias for field number 6

p_offset: int

Alias for field number 2

p_paddr: int

Alias for field number 4

p_type: int

Alias for field number 0

p_vaddr: int

Alias for field number 3

class spack.util.elf.SectionHeader(sh_name, sh_type, sh_flags, sh_addr, sh_offset, sh_size, sh_link, sh_info, sh_addralign, sh_entsize)[source]

Bases: NamedTuple

sh_addr: int

Alias for field number 3

sh_addralign: int

Alias for field number 8

sh_entsize: int

Alias for field number 9

sh_flags: int

Alias for field number 2

sh_info: int

Alias for field number 7

sh_link: int

Alias for field number 6

sh_name: int

Alias for field number 0

sh_offset: int

Alias for field number 4

sh_size: int

Alias for field number 5

sh_type: int

Alias for field number 1

class spack.util.elf.UpdateCStringAction(old_value: bytes, new_value: bytes, offset: int)[source]

Bases: object

apply(f: BinaryIO) None[source]
property inplace: bool
spack.util.elf.delete_rpath(path: str) None[source]

Modifies a binary to remove the rpath. It zeros out the rpath string and also drops the DT_R(UN)PATH entry from the dynamic section, so it doesn’t show up in ‘readelf -d file’, nor in ‘strings file’.

spack.util.elf.find_strtab_size_at_offset(f: BinaryIO, elf: ElfFile, offset: int) int[source]

Retrieve the size of a string table section at a particular known offset

Parameters:
  • f – file handle

  • elf – ELF file parser data

  • offset – offset of the section in the file (i.e. sh_offset)

Returns:

the size of the string table in bytes

Return type:

int

spack.util.elf.get_elf_compat(path)[source]

Get a triplet (EI_CLASS, EI_DATA, e_machine) from an ELF file, which can be used to see if two ELF files are compatible.
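The triplet extraction can be sketched from the ELF identification bytes: EI_CLASS at offset 4, EI_DATA at offset 5, and e_machine as a 16-bit field at offset 18, decoded in the byte order EI_DATA indicates. The helper is illustrative:

```python
import struct

ELF_MAGIC = b"\x7fELF"

def elf_compat_triplet(header: bytes):
    """Extract (EI_CLASS, EI_DATA, e_machine) from the first 20 bytes of an
    ELF file. EI_DATA selects the byte order used for e_machine itself."""
    if header[:4] != ELF_MAGIC:
        raise ValueError("not an ELF file")
    ei_class, ei_data = header[4], header[5]
    endian = "<" if ei_data == 1 else ">"  # 1 = DATA2LSB, 2 = DATA2MSB
    (e_machine,) = struct.unpack_from(endian + "H", header, 18)
    return ei_class, ei_data, e_machine

# Synthetic 64-bit little-endian header: e_type=3 (ET_DYN), e_machine=62 (x86_64)
hdr = b"\x7fELF" + bytes([2, 1, 1]) + b"\x00" * 9 + struct.pack("<HH", 3, 62)
assert elf_compat_triplet(hdr) == (2, 1, 62)
```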

spack.util.elf.get_interpreter(path: str) str | None[source]

Returns the interpreter of the given file as UTF-8 string, or None if not set.

spack.util.elf.get_rpaths(path: str) List[str] | None[source]

Returns list of rpaths of the given file as UTF-8 strings, or None if not set.

spack.util.elf.parse_c_string(byte_string: bytes, start: int = 0) bytes[source]

Retrieve a C-string at a given offset in a byte string

Parameters:
  • byte_string – String

  • start – Offset into the string

Returns:

A copy of the C-string excluding the terminating null byte

Return type:

bytes
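A sketch of the same C-string scan, assuming only that the string is NUL-terminated; the sample string table layout is hypothetical:

```python
def parse_c_string(byte_string: bytes, start: int = 0) -> bytes:
    """Return the bytes from ``start`` up to (excluding) the first NUL byte."""
    end = byte_string.find(b"\x00", start)
    if end == -1:
        raise ValueError("C-string is not null terminated")
    return byte_string[start:end]

# An ELF string table is exactly this shape: NUL-separated strings,
# addressed by byte offset.
strtab = b"\x00libc.so.6\x00/usr/lib\x00"
assert parse_c_string(strtab, 1) == b"libc.so.6"
assert parse_c_string(strtab, 11) == b"/usr/lib"
```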

spack.util.elf.parse_elf(f: BinaryIO, interpreter: bool = False, dynamic_section: bool = False, only_header: bool = False) ElfFile[source]

Given a file handle f for an ELF file opened in binary mode, return an ElfFile object that stores data about rpaths.

spack.util.elf.parse_header(f: BinaryIO, elf: ElfFile) None[source]
spack.util.elf.parse_program_headers(f: BinaryIO, elf: ElfFile) None[source]

Parse program headers

Parameters:
  • f – file handle

  • elf – ELF file parser data

spack.util.elf.parse_pt_dynamic(f: BinaryIO, elf: ElfFile) None[source]

Parse the dynamic section of an ELF file

Parameters:
  • f – file handle

  • elf – ELF file parser data

spack.util.elf.parse_pt_interp(f: BinaryIO, elf: ElfFile) None[source]

Parse the interpreter (i.e. absolute path to the dynamic linker)

Parameters:
  • f – file handle

  • elf – ELF file parser data

spack.util.elf.pt_interp(path: str) str | None[source]

Retrieve the interpreter of an executable at path.

spack.util.elf.read_exactly(f: BinaryIO, num_bytes: int, msg: str) bytes[source]

Read exactly num_bytes at the current offset, otherwise raise a parsing error with the given error message.

Parameters:
  • f – file handle

  • num_bytes – Number of bytes to read

  • msg – Error to show when bytes cannot be read

Returns:

the num_bytes bytes that were read.

Return type:

bytes

spack.util.elf.retrieve_strtab(f: BinaryIO, elf: ElfFile, offset: int) bytes[source]

Read a full string table at the given offset, which requires looking it up in the section headers.

Parameters:
  • f – file handle

  • elf – ELF file parser data

  • offset – offset of the string table section in the file

Returns: the string table as bytes

spack.util.elf.substitute_rpath_and_pt_interp_in_place_or_raise(path: str, substitutions: Dict[bytes, bytes]) bool[source]

Returns true if the rpath and interpreter were modified, false if there was nothing to do. Raises ElfCStringUpdatesFailed if the ELF file cannot be updated in-place. This exception contains a list of actions to perform with other tools. The file is left untouched in this case.

spack.util.elf.vaddr_to_offset(elf: ElfFile, vaddr: int) int[source]

Given a virtual address, find the corresponding offset in the ELF file itself.

Parameters:
  • elf – ELF file parser data

  • vaddr – virtual address
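The mapping relies on PT_LOAD segments: within a segment, file offset and virtual address differ by a constant. A sketch with a hypothetical (p_vaddr, p_memsz, p_offset) segment layout:

```python
def vaddr_to_offset(pt_load, vaddr: int) -> int:
    """Map a virtual address to a file offset using PT_LOAD segments, each
    given here as a (p_vaddr, p_memsz, p_offset) tuple (hypothetical layout)."""
    for p_vaddr, p_memsz, p_offset in pt_load:
        if p_vaddr <= vaddr < p_vaddr + p_memsz:
            # Within a segment the two address spaces differ by a constant:
            # offset = vaddr - p_vaddr + p_offset
            return vaddr - p_vaddr + p_offset
    raise ValueError(f"vaddr {vaddr:#x} is not in any PT_LOAD segment")

segments = [(0x400000, 0x1000, 0x0), (0x600000, 0x2000, 0x1000)]
assert vaddr_to_offset(segments, 0x400010) == 0x10
assert vaddr_to_offset(segments, 0x600100) == 0x1100
```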

spack.util.environment module

Set, unset or modify environment variables.

class spack.util.environment.AppendFlagsEnv(name: str, value: Any, *, separator: str = ':', trace: Trace | None = None)[source]

Bases: NameValueModifier

execute(env: MutableMapping[str, str])[source]

Apply the modification to the mapping passed as input

name
separator
trace
value
class spack.util.environment.AppendPath(name: str, value: Any, *, separator: str = ':', trace: Trace | None = None)[source]

Bases: NameValueModifier

execute(env: MutableMapping[str, str])[source]

Apply the modification to the mapping passed as input

name
separator
trace
value
class spack.util.environment.DeprioritizeSystemPaths(name: str, *, separator: str = ':', trace: Trace | None = None)[source]

Bases: NameModifier

execute(env: MutableMapping[str, str])[source]

Apply the modification to the mapping passed as input

name
separator
trace
class spack.util.environment.EnvironmentModifications(other: EnvironmentModifications | None = None, traced: None | bool = None)[source]

Bases: object

Keeps track of requests to modify the current environment.
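The record-then-apply pattern can be sketched with a minimal class; EnvMods is a hypothetical stand-in for EnvironmentModifications, supporting only set() and prepend_path():

```python
import os

class EnvMods:
    """Sketch of the request-queue pattern: modifications are recorded first
    and only take effect when apply_modifications() is called."""

    def __init__(self):
        self.mods = []

    def set(self, name, value):
        self.mods.append(lambda env, n=name, v=value: env.__setitem__(n, v))

    def prepend_path(self, name, path, sep=os.pathsep):
        def do(env, n=name, p=path, s=sep):
            env[n] = p if n not in env else p + s + env[n]
        self.mods.append(do)

    def apply_modifications(self, env):
        for mod in self.mods:   # applied in the order they were requested
            mod(env)
        self.mods.clear()       # applying also clears the list

env = {"PATH": "/usr/bin"}
mods = EnvMods()
mods.set("CC", "gcc")
mods.prepend_path("PATH", "/opt/spack/bin")
mods.apply_modifications(env)
assert env["CC"] == "gcc"
assert env["PATH"] == "/opt/spack/bin" + os.pathsep + "/usr/bin"
```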

append_flags(name: str, value: str, sep: str = ' ')[source]

Stores a request to append ‘flags’ to an environment variable.

Parameters:
  • name – name of the environment variable

  • value – flags to be appended

  • sep – separator for the flags (default: “ “)

append_path(name: str, path: str, separator: str = ':')[source]

Stores a request to append a path to list of paths.

Parameters:
  • name – name of the environment variable

  • path – path to be appended

  • separator – separator for the paths (default: os.pathsep)

apply_modifications(env: MutableMapping[str, str] | None = None)[source]

Applies the modifications and clears the list.

Parameters:

env – environment to be modified. If None, os.environ will be used.

clear()[source]

Clears the current list of modifications.

deprioritize_system_paths(name: str, separator: str = ':')[source]

Stores a request to deprioritize system paths in a path list, otherwise preserving the order.

Parameters:
  • name – name of the environment variable

  • separator – separator for the paths (default: os.pathsep)

drop(*name) bool[source]

Drop all modifications to the variable with the given name.

extend(other: EnvironmentModifications)[source]
static from_environment_diff(before: MutableMapping[str, str], after: MutableMapping[str, str], clean: bool = False) EnvironmentModifications[source]

Constructs the environment modifications from the diff of two environments.

Parameters:
  • before – environment before the modifications are applied

  • after – environment after the modifications are applied

  • clean – in addition to removing empty entries, also remove duplicate entries

static from_sourcing_file(filename: str, *arguments: str, **kwargs: Any) EnvironmentModifications[source]

Returns the environment modifications that have the same effect as sourcing the input file.

Parameters:
  • filename – the file to be sourced

  • *arguments – arguments to pass on the command line

Keyword Arguments:
  • shell (str) – the shell to use (default: bash)

  • shell_options (str) – options passed to the shell (default: -c)

  • source_command (str) – the command to run (default: source)

  • suppress_output (str) – redirect used to suppress output of command (default: &> /dev/null)

  • concatenate_on_success (str) – operator used to execute a command only when the previous command succeeds (default: &&)

  • exclude ([str or re]) – ignore any modifications of these variables (default: [])

  • include ([str or re]) – always respect modifications of these variables (default: []). Supersedes any excluded variables.

  • clean (bool) – in addition to removing empty entries, also remove duplicate entries (default: False).

group_by_name() Dict[str, List[NameModifier | NameValueModifier]][source]

Returns a dict of the current modifications keyed by variable name.

is_unset(variable_name: str) bool[source]

Returns True if the last modification to a variable is to unset it, False otherwise.

prepend_path(name: str, path: str, separator: str = ':')[source]

Stores a request to prepend a path to list of paths.

Parameters:
  • name – name of the environment variable

  • path – path to be prepended

  • separator – separator for the paths (default: os.pathsep)

prune_duplicate_paths(name: str, separator: str = ':')[source]

Stores a request to remove duplicates from a path list, otherwise preserving the order.

Parameters:
  • name – name of the environment variable

  • separator – separator for the paths (default: os.pathsep)

remove_flags(name: str, value: str, sep: str = ' ')[source]

Stores a request to remove flags from an environment variable

Parameters:
  • name – name of the environment variable

  • value – flags to be removed

  • sep – separator for the flags (default: “ “)

remove_path(name: str, path: str, separator: str = ':')[source]

Stores a request to remove a path from a list of paths.

Parameters:
  • name – name of the environment variable

  • path – path to be removed

  • separator – separator for the paths (default: os.pathsep)

reversed() EnvironmentModifications[source]

Returns the EnvironmentModifications object that will reverse self

Only creates reversals for additions to the environment, as reversing unset and remove_path modifications is impossible.

Reversible operations are set(), prepend_path(), append_path(), set_path(), and append_flags().

set(name: str, value: str, *, force: bool = False, raw: bool = False)[source]

Stores a request to set an environment variable.

Parameters:
  • name – name of the environment variable

  • value – value of the environment variable

  • force – if True, audit will not consider this modification a warning

  • raw – if True, format of value string is skipped

set_path(name: str, elements: List[str], separator: str = ':')[source]

Stores a request to set an environment variable to a list of paths, separated by a character defined in input.

Parameters:
  • name – name of the environment variable

  • elements – ordered list of paths

  • separator – separator for the paths (default: os.pathsep)

shell_modifications(shell: str = 'sh', explicit: bool = False, env: MutableMapping[str, str] | None = None) str[source]

Return shell code to apply the modifications, and clear the list.

unset(name: str)[source]

Stores a request to unset an environment variable.

Parameters:

name – name of the environment variable

class spack.util.environment.NameModifier(name: str, *, separator: str = ':', trace: Trace | None = None)[source]

Bases: object

Base class for modifiers that act on the environment variable as a whole, and thus store just its name

execute(env: MutableMapping[str, str])[source]

Apply the modification to the mapping passed as input

name
separator
trace
class spack.util.environment.NameValueModifier(name: str, value: Any, *, separator: str = ':', trace: Trace | None = None)[source]

Bases: object

Base class for modifiers that modify the value of an environment variable.

execute(env: MutableMapping[str, str])[source]

Apply the modification to the mapping passed as input

name
separator
trace
value
class spack.util.environment.PrependPath(name: str, value: Any, *, separator: str = ':', trace: Trace | None = None)[source]

Bases: NameValueModifier

execute(env: MutableMapping[str, str])[source]

Apply the modification to the mapping passed as input

name
separator
trace
value
class spack.util.environment.PruneDuplicatePaths(name: str, *, separator: str = ':', trace: Trace | None = None)[source]

Bases: NameModifier

execute(env: MutableMapping[str, str])[source]

Apply the modification to the mapping passed as input

name
separator
trace
class spack.util.environment.RemoveFlagsEnv(name: str, value: Any, *, separator: str = ':', trace: Trace | None = None)[source]

Bases: NameValueModifier

execute(env: MutableMapping[str, str])[source]

Apply the modification to the mapping passed as input

name
separator
trace
value
class spack.util.environment.RemovePath(name: str, value: Any, *, separator: str = ':', trace: Trace | None = None)[source]

Bases: NameValueModifier

execute(env: MutableMapping[str, str])[source]

Apply the modification to the mapping passed as input

name
separator
trace
value
spack.util.environment.SYSTEM_DIR_CASE_ENTRY = '"/"|"//"|"/bin"|"/bin/"|"/bin64"|"/bin64/"|"/include"|"/include/"|"/lib"|"/lib/"|"/lib64"|"/lib64/"|"/usr"|"/usr/"|"/usr/bin"|"/usr/bin/"|"/usr/bin64"|"/usr/bin64/"|"/usr/include"|"/usr/include/"|"/usr/lib"|"/usr/lib/"|"/usr/lib64"|"/usr/lib64/"|"/usr/local"|"/usr/local/"|"/usr/local/bin"|"/usr/local/bin/"|"/usr/local/bin64"|"/usr/local/bin64/"|"/usr/local/include"|"/usr/local/include/"|"/usr/local/lib"|"/usr/local/lib/"|"/usr/local/lib64"|"/usr/local/lib64/"'

used in the compiler wrapper’s /usr/lib|/usr/lib64|…) case entry

class spack.util.environment.SetEnv(name: str, value: str, *, trace: Trace | None = None, force: bool = False, raw: bool = False)[source]

Bases: NameValueModifier

execute(env: MutableMapping[str, str])[source]

Apply the modification to the mapping passed as input

force
raw
class spack.util.environment.SetPath(name: str, value: Any, *, separator: str = ':', trace: Trace | None = None)[source]

Bases: NameValueModifier

execute(env: MutableMapping[str, str])[source]

Apply the modification to the mapping passed as input

name
separator
trace
value
class spack.util.environment.Trace(*, filename: str, lineno: int, context: str)[source]

Bases: object

Trace information on a function call

context
filename
lineno
class spack.util.environment.UnsetEnv(name: str, *, separator: str = ':', trace: Trace | None = None)[source]

Bases: NameModifier

execute(env: MutableMapping[str, str])[source]

Apply the modification to the mapping passed as input

name
separator
trace
spack.util.environment.deprioritize_system_paths(paths: List[str]) List[str][source]

Reorders input paths by putting system paths at the end of the list, otherwise preserving order.
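
Because Python's sort is stable, this reordering can be sketched with a boolean sort key. SYSTEM_DIRS here is a hypothetical subset of the real system-path list:

```python
SYSTEM_DIRS = {"/usr/lib", "/usr/bin", "/usr/local/lib"}  # hypothetical subset

def deprioritize_system_paths(paths):
    # Stable sort: non-system paths (key False) keep their relative order
    # at the front; system paths (key True) move to the back, also in order.
    return sorted(paths, key=lambda p: p in SYSTEM_DIRS)

paths = ["/usr/lib", "/opt/app/lib", "/usr/bin", "/home/me/lib"]
```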

spack.util.environment.double_quote_escape(s)[source]

Return a shell-escaped version of the string s.

This is similar to how shlex.quote works, but it escapes with double quotes instead of single quotes, to allow environment variable expansion within quoted strings.
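
A sketch consistent with that description, escaping only the embedded double quotes so the shell still expands $VARIABLES inside the quoted string (the real implementation may handle more cases):

```python
def double_quote_escape(s):
    # Wrap in double quotes; escape embedded double quotes so the result
    # stays one shell word while $VAR expansion keeps working.
    return '"%s"' % s.replace('"', r'\"')
```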

spack.util.environment.dump_environment(path: str, environment: MutableMapping[str, str] | None = None)[source]

Dump an environment dictionary to a source-able file.

Parameters:
  • path – path of the file to write

  • environment – environment to be written. If None, os.environ is used.

spack.util.environment.env_flag(name: str) bool[source]

Given the name of an environment variable, returns True if it is set to ‘true’ or to ‘1’, False otherwise.
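
A minimal sketch of the documented behavior, assuming the comparison is case-insensitive:

```python
import os

def env_flag(name):
    # True only for the literal strings "true" or "1" (case-insensitive);
    # an unset variable or any other value yields False.
    return os.environ.get(name, "").lower() in ("true", "1")
```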

spack.util.environment.environment_after_sourcing_files(*files: str | Tuple[str, ...], **kwargs) Dict[str, str][source]

Returns a dictionary with the environment that one would have after sourcing the files passed as argument.

Parameters:

*files – each item can either be a string containing the path of the file to be sourced or a sequence, where the first element is the file to be sourced and the remaining are arguments to be passed to the command line

Keyword Arguments:
  • env (dict) – the initial environment (default: current environment)

  • shell (str) – the shell to use (default: /bin/bash or cmd.exe (Windows))

  • shell_options (str) – options passed to the shell (default: -c or /C (Windows))

  • source_command (str) – the command to run (default: source)

  • suppress_output (str) – redirect used to suppress output of command (default: &> /dev/null)

  • concatenate_on_success (str) – operator used to execute a command only when the previous command succeeds (default: &&)

spack.util.environment.filter_system_paths(paths: List[str]) List[str][source]

Returns a copy of the input where system paths are filtered out.

spack.util.environment.get_path(name: str) List[str][source]

Given the name of an environment variable containing multiple paths separated by ‘os.pathsep’, returns a list of the paths.

spack.util.environment.inspect_path(root: str, inspections: MutableMapping[str, List[str]], exclude: Callable[[str], bool] | None = None) EnvironmentModifications[source]

Inspects root to search for the subdirectories in inspections. Adds every path found to a list of prepend-path commands and returns it.

Parameters:
  • root – absolute path where to search for subdirectories

  • inspections – maps relative paths to a list of environment variables that will be modified if the path exists. The modifications are not performed immediately, but stored in a command object that is returned to the client

  • exclude – optional callable. If present it must accept an absolute path and return True if it should be excluded from the inspection

Examples:

The following lines execute an inspection in /usr to search for /usr/include and /usr/lib64. If found, we want to prepend /usr/include to CPATH and /usr/lib64 to MY_LIB64_PATH.

# Set up the dictionary containing the inspection
inspections = {
    'include': ['CPATH'],
    'lib64': ['MY_LIB64_PATH']
}

# Get back the list of commands needed to modify the environment
env = inspect_path('/usr', inspections)

# Eventually execute the commands
env.apply_modifications()
spack.util.environment.is_system_path(path: str) bool[source]

Returns True if the argument is a system path, False otherwise.

spack.util.environment.path_put_first(var_name: str, directories: List[str])[source]

Puts the provided directories first in the path, adding them if they’re not already there.

spack.util.environment.path_set(var_name: str, directories: List[str])[source]

Sets the variable passed as input to the os.pathsep joined list of directories.

spack.util.environment.pickle_environment(path: str, environment: Dict[str, str] | None = None)[source]

Pickle an environment dictionary to a file.

spack.util.environment.preserve_environment(*variables: str)[source]

Ensures that the value of the environment variables passed as arguments is the same before entering the context manager and after exiting it.

Variables that are unset before entering the context manager will be explicitly unset on exit.

Parameters:

variables – list of environment variables to be preserved

spack.util.environment.prune_duplicate_paths(paths: List[str]) List[str][source]

Returns the input list with duplicates removed, otherwise preserving order.
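
Order-preserving deduplication can be sketched with a dict, since dicts keep insertion order (Python 3.7+):

```python
def prune_duplicate_paths(paths):
    # dict.fromkeys keeps the first occurrence of each path and drops
    # later duplicates, preserving the original order.
    return list(dict.fromkeys(paths))
```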

spack.util.environment.sanitize(environment: MutableMapping[str, str], exclude: List[str], include: List[str]) Dict[str, str][source]

Returns a copy of the input dictionary where all the keys that match an excluded pattern and don’t match an included pattern are removed.

Parameters:
  • environment (dict) – input dictionary

  • exclude (list) – literals or regex patterns to be excluded

  • include (list) – literals or regex patterns to be included
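
A sketch of the exclude/include logic: a key is dropped only when it matches an excluded pattern and no included pattern. Whether patterns are applied with fullmatch or search is an assumption here:

```python
import re

def sanitize(environment, exclude, include):
    def matches(key, patterns):
        return any(re.fullmatch(pat, key) for pat in patterns)

    # Keep a key unless it is excluded and not explicitly re-included.
    return {
        k: v for k, v in environment.items()
        if not (matches(k, exclude) and not matches(k, include))
    }
```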

spack.util.environment.set_env(**kwargs)[source]

Temporarily sets and restores environment variables.

Variables can be set as keyword arguments to this function.
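
The set-and-restore pattern can be sketched as a context manager. This is an illustrative stand-in, not Spack's implementation:

```python
import contextlib
import os

@contextlib.contextmanager
def set_env(**kwargs):
    # Save current values, set the new ones, and restore on exit;
    # variables that were unset before are unset again afterwards.
    saved = {k: os.environ.get(k) for k in kwargs}
    try:
        os.environ.update(kwargs)
        yield
    finally:
        for k, old in saved.items():
            if old is None:
                os.environ.pop(k, None)
            else:
                os.environ[k] = old
```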

spack.util.environment.system_env_normalize(func)[source]

Decorator wrapping calls to system env modifications; on Windows it converts all env variable names to upper case before calling the env modification method, and it is a no-op on other platforms.

Due to a DOS holdover, Windows treats all env variable names case-insensitively, but Spack's env modification class does not. Setting Path and PATH would therefore be distinct operations for Spack, yet collide when the modifications are actually applied to the environment. Normalizing all env names to upper case prevents this collision on the Spack side.

spack.util.environment.validate(env: EnvironmentModifications, errstream: Callable[[str], None])[source]

Validates the environment modifications to check for the presence of suspicious patterns. Prompts a warning for everything that was found.

Current checks: - set or unset variables after other changes on the same variable

Parameters:
  • env – list of environment modifications

  • errstream – callable to log error messages

spack.util.executable module

class spack.util.executable.Executable(name)[source]

Bases: object

Class representing a program that can be run on the command line.

add_default_arg(*args)[source]

Add default argument(s) to the command.

add_default_env(key, value)[source]

Set an environment variable when the command is run.

Parameters:
  • key – The environment variable to set

  • value – The value to set it to

add_default_envmod(envmod)[source]

Set an EnvironmentModifications to use when the command is run.

property command

The command-line string.

Returns:

The executable and default arguments

Return type:

str

copy()[source]

Return a copy of this Executable.

property name

The executable name.

Returns:

The basename of the executable

Return type:

str

property path

The path to the executable.

Returns:

The path to the executable

Return type:

str

with_default_args(*args)[source]

Same as add_default_arg, but returns a copy of the executable.

exception spack.util.executable.ProcessError(message, long_message=None)[source]

Bases: SpackError

ProcessErrors are raised when Executables exit with an error code.

spack.util.executable.which(*args, **kwargs)[source]

Finds an executable in the path like command-line which.

If given multiple executables, returns the first one that is found. If no executables are found, returns None.

Parameters:

*args (str) – One or more executables to search for

Keyword Arguments:
  • path (list or str) – The path to search. Defaults to PATH

  • required (bool) – If set to True, raise an error if executable not found

Returns:

The first executable that is found in the path

Return type:

Executable
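
The first-match search can be sketched with the standard library. Note the real function returns an Executable object rather than a path string, and this error type is an assumption:

```python
import shutil

def which(*names, path=None, required=False):
    # Return the first executable found on the search path, like the
    # command-line `which`; None (or an error, if required) otherwise.
    for name in names:
        found = shutil.which(name, path=path)
        if found:
            return found
    if required:
        raise RuntimeError(f"none of {names} found in PATH")
    return None
```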

spack.util.file_cache module

exception spack.util.file_cache.CacheError(message, long_message=None)[source]

Bases: SpackError

class spack.util.file_cache.FileCache(root, timeout=120)[source]

Bases: object

This class manages cached data in the filesystem.

  • Cache files are fetched and stored by unique keys. Keys can be relative paths, so that there can be some hierarchy in the cache.

  • The FileCache handles locking cache files for reading and writing, so client code need not manage locks for cache entries.

cache_path(key)[source]

Path to the file in the cache for a particular key.

destroy()[source]

Remove all files under the cache root.

init_entry(key)[source]

Ensure we can access a cache file. Create a lock for it if needed.

Return whether the cache file exists yet or not.

mtime(key) float[source]

Return modification time of cache file, or -inf if it does not exist.

Time is in units returned by os.stat in the mtime field, which is platform-dependent.

read_transaction(key)[source]

Get a read transaction on a file cache item.

Returns a ReadTransaction context manager and opens the cache file for reading. You can use it like this:

with file_cache_object.read_transaction(key) as cache_file:
    cache_file.read()

remove(key)[source]
write_transaction(key)[source]

Get a write transaction on a file cache item.

Returns a WriteTransaction context manager that opens a temporary file for writing. Once the context manager finishes, if nothing went wrong, moves the file into place on top of the old file atomically.
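
The write-then-move-into-place step can be sketched with a temp file and an atomic rename. This is an illustration of the technique, not Spack's WriteTransaction:

```python
import os
import tempfile

def atomic_write(path, data):
    # Write to a temp file in the same directory, then atomically replace
    # the target, so readers never observe a partially written file.
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
        os.replace(tmp, path)  # atomic on POSIX and Windows
    except BaseException:
        os.unlink(tmp)
        raise
```

Using the same directory matters: os.replace is only atomic within a single filesystem.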

spack.util.file_permissions module

exception spack.util.file_permissions.InvalidPermissionsError(message, long_message=None)[source]

Bases: SpackError

Error class for invalid permission setters

spack.util.file_permissions.set_permissions(path, perms, group=None)[source]
spack.util.file_permissions.set_permissions_by_spec(path, spec)[source]

spack.util.format module

spack.util.format.get_version_lines(version_hashes_dict: dict, url_dict: dict | None = None) str[source]

Renders out a set of versions like those found in a package’s package.py file for a given set of versions and hashes.

Parameters:
  • version_hashes_dict (dict) – A dictionary of the form: version -> checksum.

  • url_dict (dict) – A dictionary of the form: version -> URL.

Returns:

Rendered version lines.

Return type:

(str)

spack.util.gcs module

This file contains the definition of the GCS Blob storage Class used to integrate GCS Blob storage with spack buildcache.

class spack.util.gcs.GCSBlob(url, client=None)[source]

Bases: object

GCS Blob object

Wraps some blob methods for spack functionality

delete_blob()[source]
exists()[source]
get()[source]
get_blob_byte_stream()[source]
get_blob_headers()[source]
upload_to_blob(local_file_path)[source]
class spack.util.gcs.GCSBucket(url, client=None)[source]

Bases: object

GCS Bucket object. A wrapper around a GCS bucket that provides methods for Spack-related tasks, such as destroy.

blob(blob_path)[source]
create()[source]
destroy(recursive=False, **kwargs)[source]

Bucket destruction method

Deletes all blobs within the bucket, and then deletes the bucket itself.

Uses GCS Batch operations to bundle several delete operations together.

exists()[source]
get_all_blobs(recursive=True, relative=True)[source]

Returns a list of all blobs within this bucket.

Parameters:

relative – if True (default), print blob paths relative to the 'build_cache' directory; if False, print absolute blob paths (useful for destruction of the bucket)

get_blob(blob_path)[source]
class spack.util.gcs.GCSHandler[source]

Bases: BaseHandler

gs_open(req)[source]
spack.util.gcs.gcs_client()[source]

Creates an authenticated GCS client to access GCS buckets and blobs.

spack.util.gcs.gcs_open(req, *args, **kwargs)[source]

Open a reader stream to a blob object on GCS

spack.util.git module

Single util module where Spack should get a git executable.

spack.util.git.git(required: bool = False)[source]

Get a git executable.

Parameters:

required – if True, fail if git is not found. By default return None.

spack.util.gpg module

spack.util.gpg.GNUPGHOME = None

GNUPGHOME environment variable in the context of this Python module

spack.util.gpg.GPG = None

Executable instance for “gpg”, initialized lazily

spack.util.gpg.GPGCONF = None

Executable instance for “gpgconf”, initialized lazily

spack.util.gpg.SOCKET_DIR = None

Socket directory required if a non default home directory is used

exception spack.util.gpg.SpackGPGError(message, long_message=None)[source]

Bases: SpackError

Class raised when GPG errors are detected.

spack.util.gpg.clear()[source]

Reset the global state to uninitialized.

spack.util.gpg.create(**kwargs)[source]

Create a new key pair.

spack.util.gpg.export_keys(location, keys, secret=False)[source]

Export public keys to a location passed as argument.

Parameters:
  • location (str) – where to export the keys

  • keys (list) – keys to be exported

  • secret (bool) – whether to export secret keys or not

spack.util.gpg.gnupghome_override(dir)[source]

Set the GNUPGHOME to a new location for this context.

Parameters:

dir (str) – new value for GNUPGHOME

spack.util.gpg.init(gnupghome=None, force=False)[source]

Initialize the global objects in the module, if not set.

When calling any gpg executable, the GNUPGHOME environment variable is set to:

  1. The value of the gnupghome argument, if not None

  2. The value of the “SPACK_GNUPGHOME” environment variable, if set

  3. The default gpg path for Spack otherwise

Parameters:
  • gnupghome (str) – value to be used for GNUPGHOME when calling GnuPG executables

  • force (bool) – if True forces the re-initialization even if the global objects are set already

spack.util.gpg.list(trusted, signing)[source]

List known keys.

Parameters:
  • trusted (bool) – if True list public keys

  • signing (bool) – if True list private keys

spack.util.gpg.public_keys(*args)[source]

Return a list of fingerprints

spack.util.gpg.public_keys_to_fingerprint(*args)[source]

Return the keys that can be used to verify binaries.

spack.util.gpg.sign(key, file, output, clearsign=False)[source]

Sign a file with a key.

Parameters:
  • key – key to be used to sign

  • file (str) – file to be signed

  • output (str) – output file (either the clearsigned file or the detached signature)

  • clearsign (bool) – if True wraps the document in an ASCII-armored signature, if False creates a detached signature

spack.util.gpg.signing_keys(*args)[source]

Return the keys that can be used to sign binaries.

spack.util.gpg.trust(keyfile)[source]

Import a public key from a file and trust it.

Parameters:

keyfile (str) – file with the public key

spack.util.gpg.untrust(signing, *keys)[source]

Delete known keys.

Parameters:
  • signing (bool) – if True deletes the secret keys

  • *keys – keys to be deleted

spack.util.gpg.verify(signature, file=None, suppress_warnings=False)[source]

Verify the signature on a file.

Parameters:
  • signature (str) – signature of the file (or clearsigned file)

  • file (str) – file to be verified. If None, then signature is assumed to be a clearsigned file.

  • suppress_warnings (bool) – whether or not to suppress warnings from GnuPG

spack.util.hash module

spack.util.hash.b32_hash(content)[source]

Return the b32 encoded sha1 hash of the input string as a string.

spack.util.hash.base32_prefix_bits(hash_string, bits)[source]

Return the first <bits> bits of a base32 string as an integer.
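
Both hash helpers can be sketched with the standard library; the lowercase output of b32_hash is an assumption here:

```python
import base64
import hashlib

def b32_hash(content):
    # Base32-encode the sha1 digest of the input string; 20 digest bytes
    # encode to exactly 32 base32 characters, with no padding.
    sha = hashlib.sha1(content.encode("utf-8"))
    return base64.b32encode(sha.digest()).decode("ascii").lower()

def base32_prefix_bits(hash_string, bits):
    # Decode back to bytes, view as a big-endian integer, and shift off
    # everything but the leading <bits> bits.
    hash_bytes = base64.b32decode(hash_string, casefold=True)
    value = int.from_bytes(hash_bytes, "big")
    return value >> (len(hash_bytes) * 8 - bits)
```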

spack.util.ld_so_conf module

spack.util.ld_so_conf.get_conf_file_from_dynamic_linker(dynamic_linker_name)[source]
spack.util.ld_so_conf.host_dynamic_linker_search_paths()[source]

Retrieve the current host runtime search paths for shared libraries. For GNU and musl Linux we try to retrieve the dynamic linker from the current Python interpreter and then find the corresponding config file (e.g. ld.so.conf or ld-musl-<arch>.path). Something similar could be done for BSD and others, but this is not implemented yet. The default paths are always returned. We don't check if the listed directories exist.

spack.util.ld_so_conf.parse_ld_so_conf(conf_file='/etc/ld.so.conf')[source]

Parse glibc style ld.so.conf file, which specifies default search paths for the dynamic linker. This can in principle also be used for musl libc.

Parameters:

conf_file (str or bytes) – Path to config file

Returns:

List of absolute search paths

Return type:

list

spack.util.libc module

spack.util.libc.libc_from_current_python_process() Spec | None[source]
spack.util.libc.libc_from_dynamic_linker(dynamic_linker: str) Spec | None[source]
spack.util.libc.libc_include_dir_from_startfile_prefix(libc_prefix: str, startfile_prefix: str) str | None[source]

Heuristic to determine the glibc include directory from the startfile prefix. Replaces $libc_prefix/lib*/<multiarch> with $libc_prefix/include/<multiarch>. This function does not check if the include directory actually exists or is correct.

spack.util.libc.parse_dynamic_linker(output: str)[source]

Parse -dynamic-linker /path/to/ld.so from compiler output

spack.util.libc.startfile_prefix(prefix: str, compatible_with: str = '/home/docs/checkouts/readthedocs.org/user_builds/spack/envs/latest/bin/python') str | None[source]

spack.util.lock module

Wrapper for llnl.util.lock allows locking to be enabled/disabled.

class spack.util.lock.Lock(path: str, *, start: int = 0, length: int = 0, default_timeout: float | None = None, debug: bool = False, desc: str = '', enable: bool | None = None)[source]

Bases: Lock

Lock that can be disabled.

This overrides the _lock() and _unlock() methods from llnl.util.lock so that all the lock API calls will succeed, but the actual locking mechanism can be disabled via _enable_locks.

cleanup(*args) None[source]
spack.util.lock.check_lock_safety(path: str) None[source]

Do some extra checks to ensure disabling locks is safe.

This will raise an error if path is group- or world-writable AND the current user can write to the directory (i.e., if this user AND others could write to the path).

This is intended to run on the Spack prefix, but can be run on any path for testing.

spack.util.log_parse module

spack.util.log_parse.make_log_context(log_events, width=None)[source]

Get error context from a log file.

Parameters:
  • log_events (list) – list of events created by ctest_log_parser.parse()

  • width (int or None) – wrap width; 0 for no limit; None to auto-size for terminal

Returns:

context from the build log with errors highlighted

Return type:

str

Parses the log file for lines containing errors, and prints them out with line numbers and context. Errors are highlighted with ‘>>’ and with red highlighting (if color is enabled).

Events are sorted by line number before they are displayed.

spack.util.log_parse.parse_log_events(stream, context=6, jobs=None, profile=False)[source]

Extract interesting events from a log file as a list of LogEvent.

Parameters:
  • stream (str or IO) – build log name or file object

  • context (int) – lines of context to extract around each log event

  • jobs (int) – number of jobs to parse with; default ncpus

  • profile (bool) – print out profile information for parsing

Returns:

two lists containing BuildError and BuildWarning objects.

Return type:

(tuple)

This is a wrapper around ctest_log_parser.CTestLogParser that lazily constructs a single CTestLogParser object. This ensures that all the regex compilation is only done once.

spack.util.module_cmd module

This module contains routines related to the module command for accessing and parsing environment modules.

spack.util.module_cmd.get_path_args_from_module_line(line)[source]
spack.util.module_cmd.get_path_from_module_contents(text, module_name)[source]
spack.util.module_cmd.load_module(mod)[source]

Takes a module name and removes modules until it is possible to load that module. It then loads the provided module. Depends on the modulecmd implementation of modules used in Cray and Lmod.

spack.util.module_cmd.module(*args, module_template: str | None = None, environb: MutableMapping[bytes, bytes] | None = None)[source]
spack.util.module_cmd.path_from_modules(modules)[source]

Inspect a list of Tcl modules for entries that indicate the absolute path at which the library supported by said module can be found.

Parameters:

modules (list) – module files to be loaded to get an external package

Returns:

Guess of the prefix path where the package

spack.util.naming module

class spack.util.naming.NamespaceTrie(separator='.')[source]

Bases: object

class Element(value)[source]

Bases: object

has_value(namespace)[source]

True if there is a value set for the given namespace.

is_leaf(namespace)[source]

True if this namespace has no children in the trie.

is_prefix(namespace)[source]

True if the namespace has a value, or if it’s the prefix of one that does.

spack.util.naming.mod_to_class(mod_name)[source]

Convert a name from module style to class name style. Spack mostly follows PEP-8:

  • Module and package names use lowercase_with_underscores.

  • Class names use the CapWords convention.

Regular source code follows these conventions. Spack is a bit more liberal with its Package names and Compiler names:

  • They can contain ‘-’ as well as ‘_’, but cannot start with ‘-‘.

  • They can start with numbers, e.g. “3proxy”.

This function converts from the module convention to the class convention by removing _ and - and converting surrounding lowercase text to CapWords. If mod_name starts with a number, the class name returned will be prepended with ‘_’ to make a valid Python identifier.
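
A sketch of the conversion rules, capitalizing each chunk between '-' or '_' separators (treatment of interior capitals may differ from the real implementation):

```python
import re

def mod_to_class(mod_name):
    # CapWords each chunk split on '-' or '_'; drop the separators.
    parts = [p for p in re.split(r"[-_]", mod_name) if p]
    class_name = "".join(p[:1].upper() + p[1:] for p in parts)
    if class_name and class_name[0].isdigit():
        class_name = "_" + class_name  # make a valid Python identifier
    return class_name
```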

spack.util.naming.possible_spack_module_names(python_mod_name)[source]

Given a Python module name, return a list of all possible spack module names that could correspond to it.

spack.util.naming.simplify_name(name)[source]

Simplify package name to only lowercase, digits, and dashes.

Simplifies a name which may include uppercase letters, periods, underscores, and pluses. In general, we want our package names to only contain lowercase letters, digits, and dashes.

Parameters:

name (str) – The original name of the package

Returns:

The new name of the package

Return type:

str

spack.util.naming.spack_module_to_python_module(mod_name)[source]

Given a Spack module name, returns the name by which it can be imported in Python.

spack.util.naming.valid_fully_qualified_module_name(mod_name)[source]

Return whether mod_name is a valid namespaced module name.

spack.util.naming.valid_module_name(mod_name)[source]

Return whether mod_name is valid for use in Spack.

spack.util.naming.validate_fully_qualified_module_name(mod_name)[source]

Raise an exception if mod_name is not a valid namespaced module name.

spack.util.naming.validate_module_name(mod_name)[source]

Raise an exception if mod_name is not valid.

spack.util.package_hash module

exception spack.util.package_hash.PackageHashError(message, long_message=None)[source]

Bases: SpackError

Raised for all errors encountered during package hashing.

class spack.util.package_hash.RemoveDirectives(spec)[source]

Bases: NodeTransformer

Remove Spack directives from a package AST.

This removes Spack directives (e.g., depends_on, conflicts, etc.) and metadata attributes (e.g., tags, homepage, url) in a top-level class definition within a package.py, but it does not modify nested classes or functions.

If removing directives causes a for, with, or while statement to have an empty body, we remove the entire statement. Similarly, if removing directives causes an if statement to have an empty body or else block, we'll remove the block (or replace the body with pass if there is an else block but no body).

visit_Assign(node)[source]
visit_ClassDef(node)[source]
visit_Expr(node)[source]
visit_For(node)[source]
visit_FunctionDef(node)[source]
visit_If(node)[source]
visit_While(node)[source]
visit_With(node)[source]
class spack.util.package_hash.RemoveDocstrings[source]

Bases: NodeTransformer

Transformer that removes docstrings from a Python AST.

This removes all strings that aren’t on the RHS of an assignment statement from the body of functions, classes, and modules – even if they’re not directly after the declaration.

remove_docstring(node)[source]
visit_ClassDef(node)[source]
visit_FunctionDef(node)[source]
visit_Module(node)[source]
class spack.util.package_hash.ResolveMultiMethods(methods)[source]

Bases: NodeTransformer

Remove multi-methods when we know statically that they won’t be used.

Say we have multi-methods like this:

class SomePackage:
    def foo(self): print("implementation 1")

    @when("@1.0")
    def foo(self): print("implementation 2")

    @when("@2.0")
    @when(sys.platform == "darwin")
    def foo(self): print("implementation 3")

    @when("@3.0")
    def foo(self): print("implementation 4")

The multimethod that will be chosen at runtime depends on the package spec and on whether we’re on the darwin platform at build time (the darwin condition for implementation 3 is dynamic). We know the package spec statically; we don’t know statically what the runtime environment will be. We need to include things that can possibly affect package behavior in the package hash, and we want to exclude things when we know that they will not affect package behavior.

If we’re at version 4.0, we know that implementation 1 will win, because some @when for 2, 3, and 4 will be False. We should only include implementation 1.

If we’re at version 1.0, we know that implementation 2 will win, because it overrides implementation 1. We should only include implementation 2.

If we’re at version 3.0, we know that implementation 4 will win, because it overrides implementation 1 (the default), and some @when on all others will be False.

If we’re at version 2.0, it’s a bit more complicated. We know we can remove implementations 2 and 4, because their @when’s will never be satisfied. But, the choice between implementations 1 and 3 will happen at runtime (this is a bad example because the spec itself has platform information, and we should prefer to use that, but we allow arbitrary boolean expressions in @when’s, so this example suffices). For this case, we end up needing to include both implementation 1 and 3 in the package hash, because either could be chosen.

resolve(impl_conditions)[source]

Given list of nodes and conditions, figure out which node will be chosen.

visit_FunctionDef(func)[source]
class spack.util.package_hash.TagMultiMethods(spec)[source]

Bases: NodeVisitor

Tag @when-decorated methods in a package AST.

visit_FunctionDef(func)[source]
spack.util.package_hash.canonical_source(spec, filter_multimethods=True, source=None)[source]

Get canonical source for a spec’s package.py by unparsing its AST.

Parameters:
  • filter_multimethods (bool) – By default, filter multimethods out of the AST if they are known statically to be unused. Supply False to disable.

  • source (str) – Optionally provide a string to read python code from.

spack.util.package_hash.package_ast(spec, filter_multimethods=True, source=None)[source]

Get the AST for the package.py file corresponding to spec.

Parameters:
  • filter_multimethods (bool) – By default, filter multimethods out of the AST if they are known statically to be unused. Supply False to disable.

  • source (str) – Optionally provide a string to read python code from.

spack.util.package_hash.package_hash(spec, source=None)[source]

Get a hash of a package’s canonical source code.

This function is used to determine whether a spec needs a rebuild when a package’s source code changes.

Parameters:

source (str) – Optionally provide a string to read python code from.

spack.util.parallel module

class spack.util.parallel.ErrorFromWorker(exc_cls, exc, tb)[source]

Bases: object

Wrapper class to report an error from a worker process

property stacktrace
class spack.util.parallel.Task(func)[source]

Bases: object

Wrapped task that traps every Exception and returns it as an ErrorFromWorker object.

We are using a wrapper class instead of a decorator since the class is pickleable, while a decorator with an inner closure is not.

spack.util.parallel.imap_unordered(f, list_of_args, *, processes: int, maxtaskperchild: int | None = None, debug=False)[source]

Wrapper around multiprocessing.Pool.imap_unordered.

Parameters:
  • f – function to apply

  • list_of_args – list of tuples of args for the task

  • processes – maximum number of processes allowed

  • debug – if False, raise an exception containing just the error messages from workers, if True an exception with complete stacktraces

  • maxtaskperchild – number of tasks to be executed by a child before being killed and substituted

Raises:

RuntimeError – if any error occurred in the worker processes

spack.util.path module

Utilities for managing paths in Spack.

TODO: this is really part of spack.config. Consolidate it.

spack.util.path.canonicalize_path(path, default_wd=None)[source]

Same as substitute_path_variables, but also returns an absolute path.

If the string is a YAML object with file annotations, make absolute paths relative to that file’s directory. Otherwise use default_wd if specified, falling back to os.getcwd().

Parameters:

path (str) – path being converted as needed

Returns:

An absolute path with path variable substitution

Return type:

(str)
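A rough sketch of these canonicalization steps using only the standard library; this omits Spack’s config-variable substitution and YAML file annotations, so it is an illustration rather than the real implementation.

```python
import os
from typing import Optional


def canonicalize_path_sketch(path: str, default_wd: Optional[str] = None) -> str:
    """Expand ~ and environment variables, then make the path absolute
    relative to default_wd (or the current working directory)."""
    path = os.path.expandvars(os.path.expanduser(path))
    if not os.path.isabs(path):
        path = os.path.join(default_wd or os.getcwd(), path)
    return os.path.normpath(path)
```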

spack.util.path.substitute_config_variables(path)[source]

Substitute placeholders into paths.

Spack allows paths in configs to have some placeholders, as follows:

  • $env The active Spack environment.

  • $spack The Spack instance’s prefix

  • $tempdir Default temporary directory returned by tempfile.gettempdir()

  • $user The current user’s username

  • $user_cache_path The user cache directory (~/.spack, unless overridden)

  • $architecture The spack architecture triple for the current system

  • $arch The spack architecture triple for the current system

  • $platform The spack platform for the current system

  • $os The OS of the current system

  • $operating_system The OS of the current system

  • $target The ISA target detected for the system

  • $target_family The family of the target detected for the system

  • $date The current date (YYYY-MM-DD)

These are substituted case-insensitively into the path, and users can use either $var or ${var} syntax for the variables. $env is only replaced if there is an active environment, and should only be used in environment yaml files.
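The case-insensitive $var / ${var} replacement can be sketched with a single regex. `substitute_sketch` and its replacements dict are illustrative only; the real function draws values from Spack configuration and the active environment.

```python
import re


def substitute_sketch(path: str, replacements: dict) -> str:
    """Replace $var and ${var} placeholders case-insensitively.
    `replacements` maps lowercase variable names to their values;
    unknown placeholders are left untouched."""

    def repl(match):
        name = (match.group(1) or match.group(2)).lower()
        return replacements.get(name, match.group(0))

    return re.sub(r"\$(\w+)|\$\{(\w+)\}", repl, path)
```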

spack.util.path.substitute_path_variables(path)[source]

Substitute config vars, expand environment vars, expand user home.

spack.util.pattern module

class spack.util.pattern.Args(*flags, **kwargs)[source]

Bases: Bunch

Subclass of Bunch to write argparse args more naturally.

class spack.util.pattern.Bunch(**kwargs)[source]

Bases: object

Carries a bunch of named attributes (from Alex Martelli’s Bunch pattern)
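A minimal Bunch is only a few lines; this sketch shows the idea (keyword arguments become object attributes):

```python
class BunchSketch:
    """Minimal Bunch: keyword arguments become object attributes."""

    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)
```

Args subclasses this shape so argparse flags and keyword options can be carried around and accessed as plain attributes.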

class spack.util.pattern.Composite(fns_to_delegate)[source]

Bases: list

class spack.util.pattern.Delegate(name, container)[source]

Bases: object

spack.util.pattern.composite(interface=None, method_list=None, container=<class 'list'>)[source]

Decorator implementing the GoF composite pattern.

Parameters:
  • interface (type) – class exposing the interface to which the composite object must conform. Only non-private and non-special methods will be taken into account

  • method_list (list) – names of methods that should be part of the composite

  • container (collections.abc.MutableSequence) – container for the composite object (default = list). Must fulfill the MutableSequence contract. The composite class will expose the container API to manage object composition

Returns:

a class decorator that patches a class adding all the methods it needs to be a composite for a given interface.
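The delegation such a decorator generates can be illustrated without the decorator machinery. `CompositeSketch` below is a hypothetical, minimal composite that forwards any unknown method call to each of its children, in container order.

```python
class CompositeSketch(list):
    """A list of components that forwards a method call to every
    element, illustrating the delegation composite() generates."""

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails, so list's
        # own API (append, remove, ...) still manages composition.
        def forward(*args, **kwargs):
            for item in self:
                getattr(item, name)(*args, **kwargs)

        return forward
```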

spack.util.prefix module

This file contains utilities for managing the installation prefix of a package.

class spack.util.prefix.Prefix[source]

Bases: str

This class represents an installation prefix, but provides useful attributes for referring to directories inside the prefix.

Attributes of this object are created on the fly when you request them, so any of the following are valid:

>>> prefix = Prefix("/usr")
>>> prefix.bin
/usr/bin
>>> prefix.lib64
/usr/lib64
>>> prefix.share.man
/usr/share/man
>>> prefix.foo.bar.baz
/usr/foo/bar/baz
>>> prefix.join("dashed-directory").bin64
/usr/dashed-directory/bin64

Prefix objects behave identically to strings. In fact, they subclass str, so operators like + are legal:

print("foobar " + prefix)

This prints foobar /usr. All of this is meant to make custom installs easy.

join(string: str) Prefix[source]

Concatenate a string to a prefix.

Useful for strings that are not valid variable names. This includes strings containing characters like - and ..

Parameters:

string – the string to append to the prefix

Returns:

the newly created installation prefix
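The on-the-fly attribute behavior can be sketched by overriding __getattr__ on a str subclass; this is a simplified illustration in the spirit of Prefix, not Spack’s actual implementation.

```python
import os


class PrefixSketch(str):
    """Sketch of a Prefix-like str subclass: unknown attributes are
    turned into joined subdirectory paths on the fly."""

    def __getattr__(self, attr):
        # Only reached when normal lookup fails, i.e. for names that
        # are not real str attributes, such as "bin" or "lib64".
        return PrefixSketch(os.path.join(self, attr))

    def join(self, string):
        # For components that are not valid identifiers, e.g. "dashed-dir".
        return PrefixSketch(os.path.join(self, string))
```

Note that this deliberately shadows str.join, trading it for path concatenation, and that each access returns a new PrefixSketch so chains like share.man keep working.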

spack.util.s3 module

class spack.util.s3.UrllibS3Handler[source]

Bases: BaseHandler

s3_open(req)[source]
class spack.util.s3.WrapStream(raw)[source]

Bases: BufferedReader

detach()[source]

Disconnect this buffer from its underlying raw stream and return it.

After the raw stream has been detached, the buffer is in an unusable state.

read(*args, **kwargs)[source]

Read and return up to n bytes.

If the argument is omitted, None, or negative, reads and returns all data until EOF.

If the argument is positive, and the underlying raw stream is not ‘interactive’, multiple raw reads may be issued to satisfy the byte count (unless EOF is reached first). But for interactive raw streams (as well as sockets and pipes), at most one raw read will be issued, and a short result does not imply that EOF is imminent.

Returns an empty bytes object on EOF.

Returns None if the underlying raw stream was open in non-blocking mode and no data is available at the moment.

spack.util.s3.get_mirror_s3_connection_info(mirror, method)[source]

Create s3 config for session/client from a Mirror instance (or just set defaults when no mirror is given).

spack.util.s3.get_s3_session(url, method='fetch')[source]
spack.util.s3.s3_client_cache: Dict[Tuple[str, str], Any] = {}

Map (mirror name, method) tuples to s3 client instances.

spack.util.spack_json module

Simple wrapper around JSON to guarantee consistent use of load/dump.

exception spack.util.spack_json.SpackJSONError(msg: str, json_error: BaseException)[source]

Bases: SpackError

Raised when there are issues with JSON parsing.

spack.util.spack_json.dump(data: Dict, stream: Any | None = None) str | None[source]

Dump JSON with a reasonable amount of indentation and separation.

spack.util.spack_json.load(stream: Any) Dict[source]

Spack JSON needs to be ordered to support specs.

spack.util.spack_yaml module

Enhanced YAML parsing for Spack.

  • load() preserves YAML Marks on returned objects – this allows us to access file and line information later.

  • Our load methods use an OrderedDict class instead of YAML’s default unordered dict.

exception spack.util.spack_yaml.SpackYAMLError(msg, yaml_error)[source]

Bases: SpackError

Raised when there are issues with YAML parsing.

spack.util.spack_yaml.dump(data, stream=None, default_flow_style=False)[source]
spack.util.spack_yaml.load(*args, **kwargs)[source]

spack.util.timer module

Utilities for timing the different phases of Spack operations and reporting the results.

class spack.util.timer.BaseTimer[source]

Bases: object

duration(name=None)[source]
measure(name)[source]
property phases
start(name=None)[source]
stop(name=None)[source]
write_json(out=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>)[source]
write_tty(out=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>)[source]
spack.util.timer.NULL_TIMER = <spack.util.timer.NullTimer object>

instance of a do-nothing timer

class spack.util.timer.NullTimer[source]

Bases: BaseTimer

Timer interface that does nothing; useful for “tell don’t ask” style code when timers are optional.

class spack.util.timer.TimeTracker(total, start, count, path)

Bases: tuple

count

Alias for field number 2

path

Alias for field number 3

start

Alias for field number 1

total

Alias for field number 0

class spack.util.timer.Timer(now: ~typing.Callable[[], float] = <built-in function time>)[source]

Bases: BaseTimer

Simple interval timer

duration(name='_global')[source]

Get the time in seconds of a named timer, or the total time if no name is passed. The duration is always 0 for timers that have not been started; no error is raised.

Parameters:

name (str) – (Optional) name of the timer

Returns:

duration of timer.

Return type:

float

measure(name)[source]

Context manager that allows you to time a block of code.

Parameters:

name (str) – Name of the timer

property phases

Get all named timers (excluding the global/total timer)

start(name='_global')[source]

Start or restart a named timer, or the global timer when no name is given.

Parameters:

name (str) – Optional name of the timer. When no name is passed, the global timer is started.

stop(name='_global')[source]

Stop a named timer, or all timers when no name is given. Stopping a timer that has not started has no effect.

Parameters:

name (str) – Optional name of the timer. When no name is passed, all timers are stopped.

write_json(out=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>, extra_attributes={})[source]

Write a JSON object with times to the output stream

write_tty(out=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>)[source]

Write a human-readable summary of timings (depth is 1)
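A minimal interval timer in this spirit, with the injectable clock that the `now` constructor parameter provides so tests can use a fake time source (a simplified sketch, not the real Timer):

```python
import time


class IntervalTimerSketch:
    """Minimal named interval timer in the spirit of
    spack.util.timer.Timer."""

    GLOBAL = "_global"

    def __init__(self, now=time.time):
        self._now = now        # injectable clock, fakeable in tests
        self._starts = {}      # name -> start timestamp
        self._totals = {}      # name -> accumulated seconds

    def start(self, name=GLOBAL):
        self._starts[name] = self._now()

    def stop(self, name=GLOBAL):
        if name in self._starts:
            elapsed = self._now() - self._starts.pop(name)
            self._totals[name] = self._totals.get(name, 0.0) + elapsed

    def duration(self, name=GLOBAL):
        # Timers that never started report 0; no error is raised.
        return self._totals.get(name, 0.0)
```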

class spack.util.timer.TimerEvent(time, running, label)

Bases: tuple

label

Alias for field number 2

running

Alias for field number 1

time

Alias for field number 0

spack.util.timer.global_timer_name = '_global'

name for the global timer (used in start(), stop(), duration() without arguments)

spack.util.url module

Utility functions for parsing, formatting, and manipulating URLs.

spack.util.url.default_download_filename(url: str) str[source]

This method computes a default file name for a given URL. Note that it makes no request, so this is not the same as the option curl -O, which uses the remote file name from the response header.

spack.util.url.file_url_string_to_path(url)[source]
spack.util.url.format(parsed_url)[source]

Format a URL string

Returns a canonicalized format of the given URL as a string.

spack.util.url.is_path_instead_of_url(path_or_url)[source]

Historically, some config files and Spack commands used paths where URLs should be used. This utility can be used to validate and promote paths to URLs.

spack.util.url.join(base_url, path, *extra, **kwargs)[source]

Joins a base URL with one or more local URL path components

If resolve_href is True, treat the base URL as though it were the locator of a web page, and the remaining URL path components as though they formed a relative URL to be resolved against it (i.e.: as in posixpath.join(…)). The result is an absolute URL to the resource to which a user’s browser would navigate if they clicked on a link with an “href” attribute equal to the relative URL.

If resolve_href is False (default), then the URL path components are joined as in posixpath.join().

Note: file:// URL path components are not canonicalized as part of this operation. To canonicalize, pass the joined url to format().

Examples

base_url = 's3://bucket/index.html'
body = fetch_body(prefix)
link = get_href(body)  # link == '../other-bucket/document.txt'

# wrong - link is a local URL that needs to be resolved against base_url
spack.util.url.join(base_url, link)
's3://bucket/other_bucket/document.txt'

# correct - resolve local URL against base_url
spack.util.url.join(base_url, link, resolve_href=True)
's3://other_bucket/document.txt'

prefix = 'https://mirror.spack.io/build_cache'

# wrong - prefix is just a URL prefix
spack.util.url.join(prefix, 'my-package', resolve_href=True)
'https://mirror.spack.io/my-package'

# correct - simply append additional URL path components
spack.util.url.join(prefix, 'my-package', resolve_href=False)  # default
'https://mirror.spack.io/build_cache/my-package'

# For canonicalizing file:// URLs, take care to explicitly differentiate
# between absolute and relative join components.

spack.util.url.local_file_path(url)[source]

Get a local file path from a url.

If url is a file:// URL, return the absolute path to the local file or directory referenced by it. Otherwise, return None.

Return the next link from a Link header value, if any.

spack.util.url.path_to_file_url(path)[source]
spack.util.url.validate_scheme(scheme)[source]

Returns true if the URL scheme is generally known to Spack. This function helps mostly in validation of paths vs urls, as Windows paths such as C:\x\y\z (with backward, not forward, slashes) may parse as a URL with scheme C and path \x\y\z.
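The quirk described above is easy to demonstrate with urllib.parse; the scheme set in this sketch is illustrative, not Spack’s actual list.

```python
from urllib.parse import urlparse

# Illustrative set of schemes; not Spack's actual list.
KNOWN_SCHEMES = {"file", "http", "https", "ftp", "s3", "gs", "ssh", "git"}


def validate_scheme_sketch(scheme: str) -> bool:
    """Accept only schemes that are generally known."""
    return scheme in KNOWN_SCHEMES
```

Since urlparse(r"C:\x\y\z").scheme is "c", which this check rejects, the caller can fall back to treating the string as a filesystem path.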

spack.util.web module

exception spack.util.web.DetailedHTTPError(req: Request, code: int, msg: str, hdrs: Message, fp: IO | None)[source]

Bases: HTTPError

class spack.util.web.ExtractMetadataParser[source]

Bases: HTMLParser

This parser takes an HTML page and selects the include-fragments, used on GitHub, https://github.github.io/include-fragment-element, as well as a possible base url.

handle_starttag(tag, attrs)[source]
exception spack.util.web.HTMLParseError[source]

Bases: Exception

class spack.util.web.LinkParser[source]

Bases: HTMLParser

This parser just takes an HTML page and strips out the hrefs on the links. Good enough for a really simple spider.

handle_starttag(tag, attrs)[source]
exception spack.util.web.NoNetworkConnectionError(message, url)[source]

Bases: SpackWebError

Raised when an operation can’t get an internet connection.

spack.util.web.SPACK_USER_AGENT = 'Spackbot/0.23.0.dev0'

User-Agent used in Request objects

class spack.util.web.SpackHTTPDefaultErrorHandler[source]

Bases: HTTPDefaultErrorHandler

http_error_default(req, fp, code, msg, hdrs)[source]
exception spack.util.web.SpackWebError(message, long_message=None)[source]

Bases: SpackError

Superclass for Spack web spidering errors.

spack.util.web.base_curl_fetch_args(url, timeout=0)[source]

Return the basic fetch arguments typically used in calls to curl.

The arguments include those for ensuring behaviors such as failing on errors for codes over 400, printing HTML headers, resolving 3xx redirects, status or failure handling, and connection timeouts.

It also uses the following configuration option to set an additional argument as needed:

  • config:connect_timeout (int): connection timeout

  • config:verify_ssl (str): Perform SSL verification

Parameters:
  • url (str) – URL whose contents will be fetched

  • timeout (int) – Connection timeout, which is only used if higher than config:connect_timeout

Returns (list): list of argument strings

spack.util.web.check_curl_code(returncode)[source]

Check standard return code failures for provided arguments.

Parameters:

returncode (int) – curl return code

Raises FetchError if the curl returncode indicates failure

spack.util.web.custom_ssl_certs() Tuple[bool, str] | None[source]

Returns a tuple (is_file, path) if custom SSL certificates are configured and valid.

spack.util.web.fetch_url_text(url, curl=None, dest_dir='.')[source]

Retrieves text-only URL content using the configured fetch method. It determines the fetch method from:

  • config:url_fetch_method (str): fetch method to use (e.g., ‘curl’)

If the method is curl, it also uses the following configuration options:

  • config:connect_timeout (int): connection timeout

  • config:verify_ssl (str): Perform SSL verification

Parameters:
  • url (str) – URL whose contents are to be fetched

  • curl (spack.util.executable.Executable or None) – (optional) curl executable if curl is the configured fetch method

  • dest_dir (str) – (optional) destination directory for fetched text file

Returns (str or None): path to the fetched file

Raises FetchError if the curl returncode indicates failure

spack.util.web.get_header(headers, header_name)[source]

Looks up a dict of headers for the given header value.

Looks up a dict of headers, [headers], for a header value given by [header_name]. Returns headers[header_name] if header_name is in headers. Otherwise, the first fuzzy match is returned, if any.

This fuzzy matching is performed by discarding word separators and capitalization, so that for example, “Content-length”, “content_length”, “conTENtLength”, etc., all match. In the case of multiple fuzzy-matches, the returned value is the “first” such match given the underlying mapping’s ordering, or unspecified if no such ordering is defined.

If header_name is not in headers, and no such fuzzy match exists, then a KeyError is raised.
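The fuzzy lookup can be sketched by normalizing away separators and case; `get_header_sketch` is an illustrative stand-in, not Spack’s exact implementation.

```python
import re


def get_header_sketch(headers: dict, header_name: str):
    """Exact key lookup first, then a fuzzy match that ignores word
    separators and capitalization; KeyError if neither succeeds."""
    if header_name in headers:
        return headers[header_name]

    def normalize(name):
        # "Content-length", "content_length", "conTENtLength"
        # all normalize to "contentlength".
        return re.sub(r"[^a-z0-9]", "", name.lower())

    wanted = normalize(header_name)
    for key, value in headers.items():
        if normalize(key) == wanted:
            return value
    raise KeyError(header_name)
```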

spack.util.web.list_url(url, recursive=False)[source]
spack.util.web.parse_etag(header_value)[source]

Parse a strong etag from an ETag: <value> header value. We don’t allow for weakness indicators because it’s unclear what that means for cache invalidation.
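A sketch of strong-ETag parsing consistent with the description above; `parse_etag_sketch` is illustrative, not Spack’s exact implementation.

```python
def parse_etag_sketch(header_value: str):
    """Return the opaque tag from a strong ETag header value, or None
    if the value is weak or malformed."""
    value = header_value.strip()
    if value.startswith("W/"):
        return None  # weak etags are ambiguous for cache invalidation
    if len(value) >= 2 and value[0] == '"' and value[-1] == '"':
        return value[1:-1]
    return None
```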

spack.util.web.push_to_url(local_file_path, remote_path, keep_original=True, extra_args=None)[source]
spack.util.web.read_from_url(url, accept_content_type=None)[source]
spack.util.web.remove_url(url, recursive=False)[source]
spack.util.web.set_curl_env_for_ssl_certs(curl: Executable) None[source]

Configure curl to use custom certs in a file at runtime. See: https://curl.se/docs/sslcerts.html item 4

spack.util.web.spider(root_urls: str | Iterable[str], depth: int = 0, concurrency: int | None = None)[source]

Get web pages from root URLs.

If depth is specified (e.g., depth=2), then this will also follow up to <depth> levels of links from each root.

Parameters:
  • root_urls – root urls used as a starting point for spidering

  • depth – level of recursion into links

  • concurrency – number of simultaneous requests that can be sent

Returns:

A dict of pages visited (URL) mapped to their full text and the set of visited links.

spack.util.web.ssl_create_default_context()[source]

Create the default SSL context for urllib with custom certificates if configured.

spack.util.web.url_exists(url, curl=None)[source]

Determines whether url exists.

A scheme-specific process is used for Google Storage (gs) and Amazon Simple Storage Service (s3) URLs; otherwise, the configured fetch method defined by config:url_fetch_method is used.

Returns (bool): True if it exists; False otherwise.

spack.util.web.urlopen = <function _urlopen.<locals>.dispatch_open>

Dispatches to the correct OpenerDirector.open, based on Spack configuration.

spack.util.windows_registry module

Utility module for dealing with Windows Registry.

class spack.util.windows_registry.HKEY[source]

Bases: object

Predefined, open registry HKEYs. From the Microsoft docs: An application must open a key before it can read data from the registry. To open a key, an application must supply a handle to another key in the registry that is already open. The system defines predefined keys that are always open. Predefined keys help an application navigate in the registry.

HKEY_CLASSES_ROOT = <spack.util.windows_registry._HKEY_CONSTANT object>
HKEY_CURRENT_CONFIG = <spack.util.windows_registry._HKEY_CONSTANT object>
HKEY_CURRENT_USER = <spack.util.windows_registry._HKEY_CONSTANT object>
HKEY_LOCAL_MACHINE = <spack.util.windows_registry._HKEY_CONSTANT object>
HKEY_PERFORMANCE_DATA = <spack.util.windows_registry._HKEY_CONSTANT object>
HKEY_USERS = <spack.util.windows_registry._HKEY_CONSTANT object>
exception spack.util.windows_registry.InvalidKeyError(key)[source]

Bases: RegistryError

Runtime error describing an invalid key access to the Windows registry

exception spack.util.windows_registry.InvalidRegistryOperation(name, e, *args, **kwargs)[source]

Bases: RegistryError

A runtime error encountered when a registry operation is invalid for an indeterminate reason

exception spack.util.windows_registry.RegistryError[source]

Bases: Exception

Runtime error concerning the Windows Registry

class spack.util.windows_registry.RegistryKey(name, handle)[source]

Bases: object

Class wrapping a Windows registry key

EnumKey(index)[source]

Convenience wrapper around winreg.EnumKey

EnumValue(index)[source]

Convenience wrapper around winreg.EnumValue

OpenKeyEx(subname, **kwargs)[source]

Convenience wrapper around winreg.OpenKeyEx

QueryInfoKey()[source]

Convenience wrapper around winreg.QueryInfoKey

QueryValueEx(name, **kwargs)[source]

Convenience wrapper around winreg.QueryValueEx

get_subkey(sub_key)[source]

Returns the subkey named sub_key as a RegistryKey object

get_value(val_name)[source]

Returns the value associated with this key as a RegistryValue object

property hkey
property subkeys

Returns list of all subkeys of this key as RegistryKey objects

property values

Returns all values of this key as a dictionary mapping value name to RegistryValue object

winreg_error_handler(name, *args, **kwargs)[source]
class spack.util.windows_registry.RegistryValue(name, value, parent_key)[source]

Bases: object

Class defining a Windows registry entry

class spack.util.windows_registry.WindowsRegistryView(key, root_key=<spack.util.windows_registry._HKEY_CONSTANT object>)[source]

Bases: object

Interface to provide access, querying, and searching to Windows registry entries. This class represents a single key entrypoint into the Windows registry and provides an interface to this key’s values, its subkeys, and those subkeys’ values. This class cannot be used to move freely about the registry; it can access only subkeys/values of the root key used to instantiate it.

class KeyMatchConditions[source]

Bases: object

static name_matcher(subkey_name)[source]
static regex_matcher(subkey_name)[source]
find_matching_subkey(subkey_name, recursive=True)[source]

Perform a BFS of subkeys until a key whose name matches the subkey_name regex is found. Returns None or the first RegistryKey object corresponding to the requested key name.

Parameters:
  • subkey_name (str) – subkey to be searched for

  • recursive (bool) – perform a recursive search

Returns:

the desired subkey as a RegistryKey object, or None
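The breadth-first traversal can be sketched over a generic tree; the dict-based nodes below are hypothetical stand-ins for RegistryKey objects and their subkeys.

```python
import re
from collections import deque


def bfs_find_key_sketch(root, pattern):
    """Breadth-first search over a tree of {"name", "children"} nodes,
    returning the first node whose name matches the regex -- the shape
    of find_matching_subkey.  Returns None if no node matches."""
    queue = deque(root["children"])
    while queue:
        node = queue.popleft()
        if re.match(pattern, node["name"]):
            return node
        queue.extend(node["children"])  # descend one level at a time
    return None
```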

find_subkey(subkey_name, recursive=True)[source]

Perform a BFS of subkeys until the desired key is found. Returns None or the RegistryKey object corresponding to the requested key name.

Parameters:
  • subkey_name (str) – subkey to be searched for

  • recursive (bool) – perform a recursive search

Returns:

the desired subkey as a RegistryKey object, or None

find_subkeys(subkey_name, recursive=True)[source]

Exactly the same as find_subkey, except this function tries to match a regex to multiple keys

Parameters:

subkey_name (str) –

Returns:

the desired subkeys as a list of RegistryKey objects, or None

find_value(val_name, recursive=True)[source]

If not recursive, return the RegistryValue object corresponding to val_name

Parameters:
  • val_name (str) – name of value desired from registry

  • recursive (bool) – optional argument, if True, the registry is searched recursively for the value of name val_name, else only the current key is searched

Returns:

The desired registry value as a RegistryValue object if it exists, otherwise, None

get_matching_subkeys(subkey_name)[source]

Returns all subkeys regex matching subkey name

Note: this method obtains only direct subkeys of the given key and does not descend to transitive subkeys. For that behavior, see find_matching_subkeys

get_subkey(subkey_name)[source]
get_subkeys()[source]
get_value(value_name)[source]

Return registry value corresponding to provided argument (if it exists)

get_values()[source]
invalid_reg_ref_error_handler()[source]
property reg