OXIESEC PANEL
Current Dir: /opt/gsutil/gslib/commands/__pycache__
Server IP: 2a02:4780:11:1594:0:ef5:22d7:a
Name                                Size       Modified                 Perms
📁 ..                               -          02/11/2025 08:19:49 AM   rwxr-xr-x
📄 __init__.cpython-39.pyc          323 bytes  02/11/2025 08:19:49 AM   rw-r--r--
📄 acl.cpython-39.pyc               18.76 KB   02/11/2025 08:19:49 AM   rw-r--r--
📄 autoclass.cpython-39.pyc         6.01 KB    02/11/2025 08:19:49 AM   rw-r--r--
📄 bucketpolicyonly.cpython-39.pyc  6.75 KB    02/11/2025 08:19:49 AM   rw-r--r--
📄 cat.cpython-39.pyc               4.17 KB    02/11/2025 08:19:49 AM   rw-r--r--
📄 compose.cpython-39.pyc           4.57 KB    02/11/2025 08:19:49 AM   rw-r--r--
📄 config.cpython-39.pyc            40.97 KB   02/11/2025 08:19:49 AM   rw-r--r--
📄 cors.cpython-39.pyc              6.56 KB    02/11/2025 08:19:49 AM   rw-r--r--
📄 cp.cpython-39.pyc                42.21 KB   02/11/2025 08:19:49 AM   rw-r--r--
📄 defacl.cpython-39.pyc            11.25 KB   02/11/2025 08:19:49 AM   rw-r--r--
📄 defstorageclass.cpython-39.pyc   5.78 KB    02/11/2025 08:19:49 AM   rw-r--r--
📄 du.cpython-39.pyc                8.47 KB    02/11/2025 08:19:49 AM   rw-r--r--
📄 hash.cpython-39.pyc              7.87 KB    02/11/2025 08:19:49 AM   rw-r--r--
📄 help.cpython-39.pyc              6.41 KB    02/11/2025 08:19:49 AM   rw-r--r--
📄 hmac.cpython-39.pyc              12.39 KB   02/11/2025 08:19:49 AM   rw-r--r--
📄 iam.cpython-39.pyc               24.6 KB    02/11/2025 08:19:49 AM   rw-r--r--
📄 kms.cpython-39.pyc               14.88 KB   02/11/2025 08:19:49 AM   rw-r--r--
📄 label.cpython-39.pyc             10.46 KB   02/11/2025 08:19:49 AM   rw-r--r--
📄 lifecycle.cpython-39.pyc         6.37 KB    02/11/2025 08:19:49 AM   rw-r--r--
📄 logging.cpython-39.pyc           9.04 KB    02/11/2025 08:19:49 AM   rw-r--r--
📄 ls.cpython-39.pyc                18.81 KB   02/11/2025 08:19:49 AM   rw-r--r--
📄 mb.cpython-39.pyc                12.23 KB   02/11/2025 08:19:49 AM   rw-r--r--
📄 mv.cpython-39.pyc                5.06 KB    02/11/2025 08:19:49 AM   rw-r--r--
📄 notification.cpython-39.pyc      25.47 KB   02/11/2025 08:19:49 AM   rw-r--r--
📄 pap.cpython-39.pyc               6.53 KB    02/11/2025 08:19:49 AM   rw-r--r--
📄 perfdiag.cpython-39.pyc          62.18 KB   02/11/2025 08:19:49 AM   rw-r--r--
📄 rb.cpython-39.pyc                3.78 KB    02/11/2025 08:19:49 AM   rw-r--r--
📄 requesterpays.cpython-39.pyc     5.53 KB    02/11/2025 08:19:49 AM   rw-r--r--
📄 retention.cpython-39.pyc         20.02 KB   02/11/2025 08:19:49 AM   rw-r--r--
📄 rewrite.cpython-39.pyc           16.12 KB   02/11/2025 08:19:49 AM   rw-r--r--
📄 rm.cpython-39.pyc                10.86 KB   02/11/2025 08:19:49 AM   rw-r--r--
📄 rpo.cpython-39.pyc               5.96 KB    02/11/2025 08:19:49 AM   rw-r--r--
📄 rsync.cpython-39.pyc             53.86 KB   02/11/2025 08:19:49 AM   rw-r--r--
📄 setmeta.cpython-39.pyc           11.25 KB   02/11/2025 08:19:49 AM   rw-r--r--
📄 signurl.cpython-39.pyc           20.99 KB   02/11/2025 08:19:49 AM   rw-r--r--
📄 stat.cpython-39.pyc              5.04 KB    02/11/2025 08:19:49 AM   rw-r--r--
📄 test.cpython-39.pyc              17.73 KB   02/11/2025 08:19:49 AM   rw-r--r--
📄 ubla.cpython-39.pyc              6.96 KB    02/11/2025 08:19:49 AM   rw-r--r--
📄 update.cpython-39.pyc            12.26 KB   02/11/2025 08:19:49 AM   rw-r--r--
📄 version.cpython-39.pyc           5.25 KB    02/11/2025 08:19:49 AM   rw-r--r--
📄 versioning.cpython-39.pyc        5.51 KB    02/11/2025 08:19:49 AM   rw-r--r--
📄 web.cpython-39.pyc               7.63 KB    02/11/2025 08:19:49 AM   rw-r--r--
Editing: cp.cpython-39.pyc
(Binary .pyc header, import table, and bytecode omitted. The recoverable module docstring reads "Implementation of Unix-like cp command for cloud storage providers." The help-text constants embedded in the module are reconstructed below.)

<B>SYNOPSIS</B>

  gsutil cp [OPTION]... src_url dst_url
  gsutil cp [OPTION]... src_url... dst_url
  gsutil cp [OPTION]... -I dst_url

<B>DESCRIPTION</B>

The ``gsutil cp`` command allows you to copy data between your local file
system and the cloud, within the cloud, and between cloud storage providers.
For example, to upload all text files from the local directory to a bucket,
you can run:

  gsutil cp *.txt gs://my-bucket

You can also download data from a bucket. The following command downloads all
text files from the top-level of a bucket to your current directory:

  gsutil cp gs://my-bucket/*.txt .

You can use the ``-n`` option to prevent overwriting the content of existing
files. The following example downloads text files from a bucket without
clobbering the data in your directory:

  gsutil cp -n gs://my-bucket/*.txt .

Use the ``-r`` option to copy an entire directory tree. For example, to
upload the directory tree ``dir``:

  gsutil cp -r dir gs://my-bucket

If you have a large number of files to transfer, you can perform a parallel
multi-threaded/multi-processing copy using the top-level gsutil ``-m`` option
(see "gsutil help options"):

  gsutil -m cp -r dir gs://my-bucket

You can use the ``-I`` option with ``stdin`` to specify a list of URLs to
copy, one per line. This allows you to use gsutil in a pipeline to upload or
download objects as generated by a program:

  cat filelist | gsutil -m cp -I gs://my-bucket

or:

  cat filelist | gsutil -m cp -I ./download_dir

where the output of ``cat filelist`` is a list of files, cloud URLs, and
wildcards of files and cloud URLs.

NOTE: Shells like ``bash`` and ``zsh`` sometimes attempt to expand wildcards
in ways that can be surprising. You may also encounter issues when attempting
to copy files whose names contain wildcard characters. For more details about
these issues, see `Wildcard behavior considerations
<https://cloud.google.com/storage/docs/wildcards#surprising-behavior>`_.
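These invocations can also be driven from a program. The following is a
minimal Python sketch, using only flags documented above; the names
``dir``, ``filelist``, and ``gs://my-bucket`` are the hypothetical examples
from this section, and ``gsutil`` is assumed to be on PATH:

  import subprocess

  # Parallel recursive upload, equivalent to: gsutil -m cp -r dir gs://my-bucket
  subprocess.run(["gsutil", "-m", "cp", "-r", "dir", "gs://my-bucket"],
                 check=True)

  # Feed an explicit URL list to cp via stdin, equivalent to:
  #   cat filelist | gsutil -m cp -I gs://my-bucket
  with open("filelist", "rb") as url_list:
      subprocess.run(["gsutil", "-m", "cp", "-I", "gs://my-bucket"],
                     stdin=url_list, check=True)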
<B>HOW NAMES ARE CONSTRUCTED</B>

The ``gsutil cp`` command attempts to name objects in ways that are
consistent with the Linux ``cp`` command. This means that names are
constructed depending on whether you're performing a recursive directory
copy or copying individually-named objects, and on whether you're copying to
an existing or non-existent directory.

When you perform recursive directory copies, object names are constructed to
mirror the source directory structure starting at the point of recursive
processing. For example, if ``dir1/dir2`` contains the file ``a/b/c``, then
the following command creates the object ``gs://my-bucket/dir2/a/b/c``:

  gsutil cp -r dir1/dir2 gs://my-bucket

In contrast, copying individually-named files results in objects named by
the final path component of the source files. For example, assuming again
that ``dir1/dir2`` contains ``a/b/c``, the following command creates the
object ``gs://my-bucket/c``:

  gsutil cp dir1/dir2/** gs://my-bucket

Note that in the above example, the '**' wildcard matches all names anywhere
under ``dir1/dir2``. The wildcard '*' matches names just one level deep. For
more details, see `URI wildcards
<https://cloud.google.com/storage/docs/wildcards#surprising-behavior>`_.

The same rules apply for uploads and downloads: recursive copies of buckets
and bucket subdirectories produce a mirrored filename structure, while
copying individually or wildcard-named objects produces flatly-named files.

In addition, the resulting names depend on whether the destination
subdirectory exists. For example, if ``gs://my-bucket/subdir`` exists as a
subdirectory, the following command creates the object
``gs://my-bucket/subdir/dir2/a/b/c``:

  gsutil cp -r dir1/dir2 gs://my-bucket/subdir

In contrast, if ``gs://my-bucket/subdir`` does not exist, this same
``gsutil cp`` command creates the object ``gs://my-bucket/subdir/a/b/c``.
A code sketch of this naming rule follows below.

NOTE: The `Google Cloud Platform Console <https://console.cloud.google.com>`_
creates folders by creating "placeholder" objects that end with a "/"
character. gsutil skips these objects when downloading from the cloud to the
local file system, because creating a file that ends with a "/" is not
allowed on Linux and macOS. We recommend that you only create objects that
end with "/" if you don't intend to download such objects using gsutil.
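To make the destination-dependent naming rule concrete, here is a minimal
Python sketch of the behavior described above. It is an illustration of the
documented rule, not gsutil's actual implementation; the helper name
``dest_object_name`` is hypothetical:

  import posixpath

  def dest_object_name(src_dir, src_file, dest_subdir_exists):
      """Object name produced by 'gsutil cp -r src_dir gs://bucket/subdir'.

      src_dir:  the directory named on the command line, e.g. 'dir1/dir2'
      src_file: a file under it, e.g. 'dir1/dir2/a/b/c'
      Returns the name relative to the destination subdirectory.
      """
      rel = posixpath.relpath(src_file, src_dir)  # 'a/b/c'
      if dest_subdir_exists:
          # Existing destination: mirror from the point of recursion.
          return posixpath.join(posixpath.basename(src_dir), rel)
      # Non-existent destination: the named subdir itself becomes the prefix.
      return rel

  assert dest_object_name("dir1/dir2", "dir1/dir2/a/b/c", True) == "dir2/a/b/c"
  assert dest_object_name("dir1/dir2", "dir1/dir2/a/b/c", False) == "a/b/c"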
<B>COPYING TO/FROM SUBDIRECTORIES; DISTRIBUTING TRANSFERS ACROSS MACHINES</B>

You can use gsutil to copy to and from subdirectories by using a command
like this:

  gsutil cp -r dir gs://my-bucket/data

This causes ``dir`` and all of its files and nested subdirectories to be
copied under the specified destination, resulting in objects with names like
``gs://my-bucket/data/dir/a/b/c``. Similarly, you can download from bucket
subdirectories using the following command:

  gsutil cp -r gs://my-bucket/data dir

This causes everything nested under ``gs://my-bucket/data`` to be downloaded
into ``dir``, resulting in files with names like ``dir/data/a/b/c``.

Copying subdirectories is useful if you want to add data to an existing
bucket directory structure over time. It's also useful if you want to
parallelize uploads and downloads across multiple machines (potentially
reducing overall transfer time compared with running ``gsutil -m cp`` on one
machine). For example, if your bucket contains this structure:

  gs://my-bucket/data/result_set_01/
  gs://my-bucket/data/result_set_02/
  ...
  gs://my-bucket/data/result_set_99/

you can perform concurrent downloads across 3 machines by running these
commands on each machine, respectively:

  gsutil -m cp -r gs://my-bucket/data/result_set_[0-3]* dir
  gsutil -m cp -r gs://my-bucket/data/result_set_[4-6]* dir
  gsutil -m cp -r gs://my-bucket/data/result_set_[7-9]* dir

Note that ``dir`` could be a local directory on each machine, or a directory
mounted off of a shared file server. The performance of the latter depends
on several factors, so we recommend experimenting to find out what works
best for your computing environment.
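The three-machine split above can be parameterized instead of hand-copied to
each host. A minimal sketch, reusing the hypothetical bucket layout and
ranges from the example; only the machine index differs per host:

  import subprocess

  # One bracket-expression per machine, taken from the example above.
  RANGES = ["[0-3]", "[4-6]", "[7-9]"]

  def download_shard(machine_index, dest_dir="dir"):
      """Run this machine's share of the documented 3-way split."""
      pattern = f"gs://my-bucket/data/result_set_{RANGES[machine_index]}*"
      # Passing the pattern as a single argv element (no shell=True) means
      # the local shell never expands it; gsutil expands cloud wildcards.
      subprocess.run(["gsutil", "-m", "cp", "-r", pattern, dest_dir],
                     check=True)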
<B>COPYING IN THE CLOUD AND METADATA PRESERVATION</B>

If both the source and destination URL are cloud URLs from the same
provider, gsutil copies data "in the cloud" (without downloading to and
uploading from the machine where you run gsutil). In addition to the
performance and cost advantages of doing this, copying in the cloud
preserves metadata such as ``Content-Type`` and ``Cache-Control``. In
contrast, when you download data from the cloud, it ends up in a file with
no associated metadata, unless you have some way to keep or re-create that
metadata.

Copies spanning locations and/or storage classes cause data to be rewritten
in the cloud, which may take some time (but is still faster than downloading
and re-uploading). Such operations can be resumed with the same command if
they are interrupted, so long as the command parameters are identical.

Note that by default, the gsutil ``cp`` command does not copy the object ACL
to the new object, and instead uses the default bucket ACL (see "gsutil help
defacl"). You can override this behavior with the ``-p`` option.

When copying in the cloud, if the destination bucket has Object Versioning
enabled, by default ``gsutil cp`` copies only live versions of the source
object. For example, the following command causes only the single live
version of ``gs://bucket1/obj`` to be copied to ``gs://bucket2``, even if
there are noncurrent versions of ``gs://bucket1/obj``:

  gsutil cp gs://bucket1/obj gs://bucket2

To also copy noncurrent versions, use the ``-A`` flag:

  gsutil cp -A gs://bucket1/obj gs://bucket2

The top-level gsutil ``-m`` flag is not allowed when using the ``cp -A``
flag.

<B>CHECKSUM VALIDATION</B>

gsutil automatically performs checksum validation for copies to and from
Cloud Storage. For more information, see `Hashes and ETags
<https://cloud.google.com/storage/docs/hashes-etags#cli>`_.

<B>RETRY HANDLING</B>

The ``cp`` command retries when failures occur, but if enough failures
happen during a particular copy or delete operation, or if a failure isn't
retryable, the ``cp`` command skips that object and moves on. If any
failures were not successfully retried by the end of the copy run, the
``cp`` command reports the number of failures and exits with a non-zero
status.

For details about gsutil's overall retry handling, see `Retry strategy
<https://cloud.google.com/storage/docs/retry-strategy#tools>`_.

<B>RESUMABLE TRANSFERS</B>

gsutil automatically resumes interrupted downloads and interrupted
`resumable uploads
<https://cloud.google.com/storage/docs/resumable-uploads#gsutil>`_, except
when performing streaming transfers. In the case of an interrupted download,
a partially downloaded temporary file is visible in the destination
directory with the suffix ``_.gstmp`` in its name. Upon completion, the
original file is deleted and replaced with the downloaded contents.

Resumable transfers store state information in files under ~/.gsutil, named
by the destination object or file. See "gsutil help prod" for details on
using resumable transfers in production.

<B>STREAMING TRANSFERS</B>

Use '-' in place of src_url or dst_url to perform a `streaming transfer
<https://cloud.google.com/storage/docs/streaming>`_.

Streaming uploads using the `JSON API
<https://cloud.google.com/storage/docs/request-endpoints#gsutil>`_ are
buffered in memory part-way back into the file and can thus sometimes resume
in the event of network or service problems.

gsutil does not support resuming streaming uploads using the XML API or
resuming streaming downloads for either JSON or XML. If you have a large
amount of data to transfer in these cases, we recommend that you write the
data to a local file and copy that file rather than streaming it.

<B>SLICED OBJECT DOWNLOADS</B>

gsutil can automatically use ranged ``GET`` requests to perform downloads in
parallel for large files being downloaded from Cloud Storage. See `sliced
object download documentation
<https://cloud.google.com/storage/docs/sliced-object-downloads>`_ for a
complete discussion.

<B>PARALLEL COMPOSITE UPLOADS</B>

gsutil can automatically use `object composition
<https://cloud.google.com/storage/docs/composite-objects>`_ to perform
uploads in parallel for large, local files being uploaded to Cloud Storage.
See the `parallel composite uploads documentation
<https://cloud.google.com/storage/docs/parallel-composite-uploads>`_ for a
complete discussion.

<B>CHANGING TEMP DIRECTORIES</B>

gsutil writes data to a temporary directory in several cases:

- when compressing data to be uploaded (see the ``-z`` and ``-Z`` options)
- when decompressing data being downloaded (for example, when the data has
  ``Content-Encoding:gzip`` as a result of being uploaded using gsutil cp -z
  or gsutil cp -Z)
- when running integration tests using the gsutil test command

In these cases, it's possible the temporary file location on your system
that gsutil selects by default may not have enough space. If gsutil runs out
of space during one of these operations (for example, raising
"CommandException: Inadequate temp space available to compress <your file>"
during a ``gsutil cp -z`` operation), you can change where it writes these
temp files by setting the TMPDIR environment variable. On Linux and macOS,
you can set the variable as follows:

  TMPDIR=/some/directory gsutil cp ...

You can also add this line to your ~/.bashrc file and restart the shell
before running gsutil:

  export TMPDIR=/some/directory

On Windows 7, you can change the TMPDIR environment variable from Start ->
Computer -> System -> Advanced System Settings -> Environment Variables. You
need to reboot after making this change for it to take effect. Rebooting is
not necessary after running the export command on Linux and macOS.
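When gsutil is launched from another program, the same TMPDIR override can
be applied per-invocation instead of shell-wide. A minimal sketch; the
directory ``/some/directory`` and file ``big.html`` are illustrative:

  import os
  import subprocess

  # Per-process TMPDIR override, equivalent to:
  #   TMPDIR=/some/directory gsutil cp -z html big.html gs://my-bucket
  env = dict(os.environ, TMPDIR="/some/directory")
  subprocess.run(["gsutil", "cp", "-z", "html", "big.html", "gs://my-bucket"],
                 env=env, check=True)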
<B>SYNCHRONIZING OVER OS-SPECIFIC FILE TYPES (SUCH AS SYMLINKS AND
DEVICES)</B>

Please see the section about OS-specific file types in "gsutil help rsync".
While that section refers to the ``rsync`` command, analogous points apply
to the ``cp`` command.

<B>OPTIONS</B>

-a predef_acl
  Applies the specific predefined ACL to uploaded objects. See "gsutil help
  acls" for further details.

-A
  Copy all source versions from a source bucket or folder. If not set, only
  the live version of each source object is copied.

  NOTE: This option is only useful when the destination bucket has Object
  Versioning enabled. Additionally, the generation numbers of copied
  versions do not necessarily match the order of the original generation
  numbers.

-c
  If an error occurs, continue attempting to copy the remaining files. If
  any copies are unsuccessful, gsutil's exit status is non-zero, even if
  this flag is set. This option is implicitly set when running
  ``gsutil -m cp...``.

  NOTE: ``-c`` only applies to the actual copying operation. If an error,
  such as ``invalid Unicode file name``, occurs while iterating over the
  files in the local directory, gsutil prints an error message and aborts.

-D
  Copy in "daisy chain" mode, which means copying between two buckets by
  first downloading to the machine where gsutil is run, then uploading to
  the destination bucket. The default mode is a "copy in the cloud," where
  data is copied between two buckets without uploading or downloading.

  During a "copy in the cloud," a source composite object remains composite
  at its destination. However, you can use "daisy chain" mode to change a
  composite object into a non-composite object. For example:

    gsutil cp -D gs://bucket/obj gs://bucket/obj_tmp
    gsutil mv gs://bucket/obj_tmp gs://bucket/obj

  NOTE: "Daisy chain" mode is automatically used when copying between
  providers: for example, when copying data from Cloud Storage to another
  provider.

-e
  Exclude symlinks. When specified, symbolic links are not copied.

-I
  Use ``stdin`` to specify a list of files or objects to copy. You can use
  gsutil in a pipeline to upload or download objects as generated by a
  program. For example:

    cat filelist | gsutil -m cp -I gs://my-bucket

  where the output of ``cat filelist`` is a one-per-line list of files,
  cloud URLs, and wildcards of files and cloud URLs.

-j <ext,...>
  Applies gzip transport encoding to any file upload whose extension matches
  the ``-j`` extension list. This is useful when uploading files with
  compressible content such as .js, .css, or .html files. This also saves
  network bandwidth while leaving the data uncompressed in Cloud Storage.

  When you specify the ``-j`` option, files being uploaded are compressed
  in-memory and on-the-wire only. Both the local files and Cloud Storage
  objects remain uncompressed. The uploaded objects retain the
  ``Content-Type`` and name of the original files.

  Note that if you want to use the ``-m`` `top-level option
  <https://cloud.google.com/storage/docs/gsutil/addlhelp/GlobalCommandLineOptions>`_
  to parallelize copies along with the ``-j/-J`` options, your performance
  may be bottlenecked by the "max_upload_compression_buffer_size" boto
  config option, which is set to 2 GiB by default. You can change this
  compression buffer size to a higher limit. For example:

    gsutil -o "GSUtil:max_upload_compression_buffer_size=8G" \
      -m cp -j html,txt -r /local/source/dir gs://bucket/path

-J
  Applies gzip transport encoding to file uploads. This option works like
  the ``-j`` option described above, but it applies to all uploaded files,
  regardless of extension.

  CAUTION: If some of the source files don't compress well, such as binary
  data, using this option may result in longer uploads.

-L <file>
  Outputs a manifest log file with detailed information about each item that
  was copied. This manifest contains the following information for each
  item:

  - Source path.
  - Destination path.
  - Source size.
  - Bytes transferred.
  - MD5 hash.
  - Transfer start time and date in UTC and ISO 8601 format.
  - Transfer completion time and date in UTC and ISO 8601 format.
  - Upload id, if a resumable upload was performed.
  - Final result of the attempted transfer, either success or failure.
  - Failure details, if any.

  If the log file already exists, gsutil uses the file as an input to the
  copy process, and appends log items to the existing file. Objects that are
  marked in the existing log file as having been successfully copied or
  skipped are ignored. Objects without entries are copied and ones
  previously marked as unsuccessful are retried. This option can be used in
  conjunction with the ``-c`` option to build a script that copies a large
  number of objects reliably, using a bash script like the following:

    until gsutil cp -c -L cp.log -r ./dir gs://bucket; do
      sleep 1
    done

  The -c option enables copying to continue after failures occur, and the -L
  option allows gsutil to pick up where it left off without duplicating
  work. The loop continues running as long as gsutil exits with a non-zero
  status. A non-zero status indicates there was at least one failure during
  the copy operation.

  NOTE: If you are synchronizing the contents of a directory and a bucket,
  or the contents of two buckets, see "gsutil help rsync".
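The ``until`` loop in the ``-L`` entry above translates directly into
Python. A minimal sketch, reusing the same hypothetical ``cp.log``,
``./dir``, and ``gs://bucket`` names:

  import subprocess
  import time

  # Keep retrying until gsutil exits 0. -c continues past per-object
  # failures; -L records progress in cp.log so each retry skips objects
  # already copied successfully.
  while subprocess.run(
          ["gsutil", "cp", "-c", "-L", "cp.log", "-r", "./dir",
           "gs://bucket"]).returncode != 0:
      time.sleep(1)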
-n
  No-clobber. When specified, existing files or objects at the destination
  are not replaced. Any items that are skipped by this option are reported
  as skipped. gsutil performs an additional GET request to check if an item
  exists before attempting to upload the data. This saves gsutil from
  retransmitting data, but the additional HTTP requests may make small
  object transfers slower and more expensive.

-p
  Preserves ACLs when copying in the cloud. Note that this option has
  performance and cost implications only when using the XML API, as the XML
  API requires separate HTTP calls for interacting with ACLs. You can
  mitigate this performance issue using ``gsutil -m cp`` to perform parallel
  copying. Note that this option only works if you have OWNER access to all
  objects that are copied.

  If you want all objects in the destination bucket to end up with the same
  ACL, you can avoid these performance issues by setting a default object
  ACL on that bucket instead of using ``cp -p``. See "gsutil help defacl".

  Note that it's not valid to specify both the ``-a`` and ``-p`` options
  together.

-P
  Enables POSIX attributes to be preserved when objects are copied.
  ``gsutil cp`` copies fields provided by ``stat``. These fields are the
  user ID of the owner, the group ID of the owning group, the mode or
  permissions of the file, and the access and modification time of the file.
  For downloads, these attributes are only set if the source objects were
  uploaded with this flag enabled.

  On Windows, this flag only sets and restores access time and modification
  time. This is because Windows doesn't support POSIX uid/gid/mode.

-R, -r
  The ``-R`` and ``-r`` options are synonymous. They enable directories,
  buckets, and bucket subdirectories to be copied recursively. If you don't
  use this option for an upload, gsutil copies objects it finds and skips
  directories. Similarly, if you don't specify this option for a download,
  gsutil copies objects at the current bucket directory level and skips
  subdirectories.

-s <class>
  Specifies the storage class of the destination object. If not specified,
  the default storage class of the destination bucket is used. This option
  is not valid for copying to non-cloud destinations.

-U
  Skips objects with unsupported object types instead of failing.
  Unsupported object types include Amazon S3 objects in the GLACIER storage
  class.
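Several of these options compose on a single command line. A minimal sketch
combining recursion (``-r``), POSIX-attribute preservation (``-P``), and an
explicit storage class (``-s``); the bucket name and the ``nearline`` class
choice are illustrative assumptions:

  import subprocess

  # Recursive upload preserving POSIX attributes, storing objects in the
  # Nearline class instead of the bucket's default.
  subprocess.run(
      ["gsutil", "cp", "-r", "-P", "-s", "nearline", "dir", "gs://my-bucket"],
      check=True)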
-v
  Prints the version-specific URL for each uploaded object. You can use
  these URLs to safely make concurrent upload requests, because Cloud
  Storage refuses to perform an update if the current object version doesn't
  match the version-specific URL. See `generation numbers
  <https://cloud.google.com/storage/docs/metadata#generation-number>`_ for
  more details.

-z <ext,...>
  Applies gzip content-encoding to any file upload whose extension matches
  the ``-z`` extension list. This is useful when uploading files with
  compressible content such as .js, .css, or .html files, because it reduces
  network bandwidth and storage sizes. This can both improve performance and
  reduce costs.

  When you specify the ``-z`` option, the data from your files is compressed
  before it is uploaded, but your actual files are left uncompressed on the
  local disk. The uploaded objects retain the ``Content-Type`` and name of
  the original files, but have their ``Content-Encoding`` metadata set to
  ``gzip`` to indicate that the object data stored are compressed on the
  Cloud Storage servers and have their ``Cache-Control`` metadata set to
  ``no-transform``.

  For example, the following command:

    gsutil cp -z html \
      cattypes.html tabby.jpeg gs://mycats

  does the following:

  - The ``cp`` command uploads the files ``cattypes.html`` and
    ``tabby.jpeg`` to the bucket ``gs://mycats``.
  - Based on the file extensions, gsutil sets the ``Content-Type`` of
    ``cattypes.html`` to ``text/html`` and ``tabby.jpeg`` to ``image/jpeg``.
  - The ``-z`` option compresses the data in the file ``cattypes.html``.
  - The ``-z`` option also sets the ``Content-Encoding`` for
    ``cattypes.html`` to ``gzip`` and the ``Cache-Control`` for
    ``cattypes.html`` to ``no-transform``.

  Because the ``-z/-Z`` options compress data prior to upload, they are not
  subject to the same compression buffer bottleneck that can affect the
  ``-j/-J`` options.

  Note that if you download an object with ``Content-Encoding:gzip``, gsutil
  decompresses the content before writing the local file.

-Z
  Applies gzip content-encoding to file uploads. This option works like the
  ``-z`` option described above, but it applies to all uploaded files,
  regardless of extension.

  CAUTION: If some of the source files don't compress well, such as binary
  data, using this option may result in files taking up more space in the
  cloud than they would if left uncompressed.

--stet
  If the STET binary can be found in boto or PATH, cp will use the
  split-trust encryption tool for end-to-end encryption.

(The remaining bytes of the module are compiled constants and bytecode: the
command's getopt specification "a:AcDeIL:MNnpPrRs:tUvz:Zj:J" and a table
pairing each gsutil cp flag with its gcloud storage equivalent, reconstructed
below.)
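The flag-translation table can be recovered from the string constants at the
end of the dump. The pairing below is inferred from the order of those
constants; the dict name is hypothetical (in the real module the values are
wrapped in GcloudStorageFlag objects), and the ``-R``/``-r`` recursion flags
are handled by separate trailing bytecode, so they are omitted:

  # gsutil cp flag -> gcloud storage cp flag, as recovered from the dump.
  GSUTIL_TO_GCLOUD_STORAGE_FLAGS = {
      "-A": "--all-versions",
      "-a": "--predefined-acl",
      "-c": "--continue-on-error",
      "-D": "--daisy-chain",
      "-e": "--ignore-symlinks",
      "-I": "--read-paths-from-stdin",
      "-J": "--gzip-in-flight-all",
      "-j": "--gzip-in-flight",
      "-L": "--manifest-path",
      "-n": "--no-clobber",
      "-P": "--preserve-posix",
      "-p": "--preserve-acl",
      "-s": "--storage-class",
      "-v": "--print-created-message",
      "-Z": "--gzip-local-all",
      "-z": "--gzip-local",
  }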