base-command (bc)

Base Command Commands

usage: ngc base-command [--debug] [--format_type <fmt>] [-h] {quickstart} ...

Named Arguments

--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

base-command

{quickstart}

Possible choices: ace, datamover, dataset, dm, job, qs, quickstart, resource, result, workspace

Sub-commands

ace

ACE Commands

ngc base-command ace [--debug] [--format_type <fmt>] [-h]  ...

Named Arguments

--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

ace

Possible choices: info, list, usage

Sub-commands

list

List each ACE accessible with the current configuration.

ngc base-command ace list [--column <column>] [--debug] [--format_type <fmt>]
                          [-h]
Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--column

Specify output column as column[=header], header is optional, default is name[=ACE]. Valid columns are id[=Id], description[=Description], instances[=Instances]. Use quotes with spaces. Multiple column arguments are allowed.
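For example, assuming the NGC CLI is installed and configured, a hypothetical invocation selecting specific columns (column names taken from the option description above):

```shell
# List accessible ACEs, showing their IDs and instance types, as JSON
ngc base-command ace list --column id --column instances --format_type json
```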

info

Get ACE details for the given ACE name.

ngc base-command ace info [--debug] [--format_type <fmt>] [-h] <ace name>
Positional Arguments
<ace name>

ACE Name

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

usage

[DEPRECATED] Get resource usage information about an ACE.

ngc base-command ace usage [--debug] [--format_type <fmt>]
                           [--only-unavailable]
                           [--resource-type {CPU,GPU,MIG}] [-h]
                           <ace name>
Positional Arguments
<ace name>

ACE Name

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--only-unavailable

Only show items that have unavailable resources.

--resource-type

Possible choices: CPU, GPU, MIG

Only show items of this resource type.

job

Job Commands

ngc base-command job [--debug] [--format_type <fmt>] [-h]  ...

Named Arguments

--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

job

Possible choices: attach, exec, get-json, info, kill, list, log, preempt, resume, run, telemetry, update

Sub-commands

list

List all jobs belonging to the user in the last week, filtered by the configured ACE and team name. The time range can be narrowed using up to two of the options --begin-time, --end-time, and --duration. Acceptable combinations:

--begin-time <t> --end-time <t> : jobs between begin-time and end-time
--begin-time <t> --duration <t> : jobs in the period of duration after begin-time
--end-time <t> --duration <t> : jobs in the period of duration up to end-time
--end-time <t> : jobs in the 7 days before end-time
--begin-time <t> : jobs between begin-time and now
--duration <t> : jobs in the specified amount of time before now

ngc base-command job list [--all] [--begin-time <t>] [--column <column>]
                          [--debug] [--duration <t>] [--end-time <t>]
                          [--exclude-label <label>] [--format_type <fmt>]
                          [--interval <num>] [--label <label>] [--long]
                          [--priority <priority>] [--refresh] [--status <s>]
                          [-h]
Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--all

(For administrators only) Show all jobs across all users.

--long

Include additional information in the job status table.

--duration

Specifies the duration of time, either after begin-time or before end-time, for listing jobs created. Format: [nD][nH][nM][nS]. Default: 7 days

--end-time

Specifies the period end time for listing jobs created. Format: [yyyy-MM-dd::HH:mm:ss]. Default: now

--begin-time

Specifies the start time for listing jobs created. Format: [yyyy-MM-dd::HH:mm:ss].

--refresh

Enables refreshing of list.

--interval

Refresh interval in seconds. Allowed range [5-300] Default: 5

--status

Possible choices: CANCELED, CREATED, FAILED, FAILED_RUN_LIMIT_EXCEEDED, FINISHED_SUCCESS, IM_INTERNAL_ERROR, INFINITY_POOL_MISSING, KILLED_BY_ADMIN, KILLED_BY_SYSTEM, KILLED_BY_USER, PENDING_STORAGE_CREATION, PENDING_TERMINATION, PREEMPTED, PREEMPTED_BY_ADMIN, QUEUED, REQUESTING_RESOURCE, RESOURCE_CONSUMPTION_REQUEST_IN_PROGRESS, RESOURCE_GRANTED, RESOURCE_GRANT_DENIED, RESOURCE_LIMIT_EXCEEDED, RESOURCE_RELEASED, RUNNING, STARTING, TASK_LOST, UNKNOWN

Filter jobs listed according to input status. Options: ['CANCELED', 'CREATED', 'FAILED', 'FAILED_RUN_LIMIT_EXCEEDED', 'FINISHED_SUCCESS', 'IM_INTERNAL_ERROR', 'INFINITY_POOL_MISSING', 'KILLED_BY_ADMIN', 'KILLED_BY_SYSTEM', 'KILLED_BY_USER', 'PENDING_STORAGE_CREATION', 'PENDING_TERMINATION', 'PREEMPTED', 'PREEMPTED_BY_ADMIN', 'QUEUED', 'REQUESTING_RESOURCE', 'RESOURCE_CONSUMPTION_REQUEST_IN_PROGRESS', 'RESOURCE_GRANTED', 'RESOURCE_GRANT_DENIED', 'RESOURCE_LIMIT_EXCEEDED', 'RESOURCE_RELEASED', 'RUNNING', 'STARTING', 'TASK_LOST', 'UNKNOWN']

--column

Specify output column as column[=header], header is optional, default is id[=Id]. Valid columns are replicas[=Replicas], name[=Name], submitted[="Submitted By"], status[=Status], details[="Status Details"], type[="Status Type"], created[=Created], started[=Started], ended[=Ended], termination[=Termination], ace[=Ace], team[=Team], org[=Org], instance[="Instance Name"], duration[=Duration], reason[="Termination Reason"], labels[=Labels], locked[="Labels Locked"], order[=Order], priority[=Priority]. Use quotes with spaces. Multiple column arguments are allowed.

--label

Filter listed jobs by label. Multiple label arguments are allowed and support standard Unix shell-style wildcards such as '*' and '?'.

--exclude-label

Exclude jobs matching the label from the listing. Multiple exclude-label arguments are allowed and support standard Unix shell-style wildcards such as '*' and '?'. Also filters out jobs without labels.

--priority

Possible choices: HIGH, LOW, NORMAL

Filter jobs listed according to priority. Options: ['HIGH', 'LOW', 'NORMAL']
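A hypothetical invocation combining a time window with status and label filters (the date and label are illustrative, and assume a configured CLI):

```shell
# List RUNNING jobs created in the 2 days after the given begin time,
# keeping only jobs whose labels start with "exp_"
ngc base-command job list \
    --begin-time 2024-05-01::00:00:00 --duration 2D \
    --status RUNNING --label 'exp_*'
```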

preempt

Preempt the job. This begins a graceful shutdown of a job. Once preempted, the job will remain in a PREEMPTED state until acted on to resume or kill the job.

ngc base-command job preempt [--debug] [--format_type <fmt>] [-h] <job id>
Positional Arguments
<job id>

Job ID

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

resume

Resume the preempted job. This takes a job in the PREEMPTED state and resubmits it to the queue.

ngc base-command job resume [--debug] [--format_type <fmt>] [-h] <job id>
Positional Arguments
<job id>

Job ID

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

run

Submit a new job. ACE must be set to run this command.

ngc base-command job run [--array-type <type>] [--clone <jobid>]
                         [--coscheduling] [--datasetid <id>] [--debug]
                         [--entrypoint <entry>] [--env-var <key:value>]
                         [--experiment-flow-type <type>]
                         [--experiment-name <type>]
                         [--experiment-project-name <type>]
                         [--format_type <fmt>] [--label <label>]
                         [--lock-label] [--min-availability <num>]
                         [--min-timeslice <t>] [--network <type>]
                         [--order <order>] [--preempt <class>]
                         [--priority <priority>] [--replicas <num>]
                         [--result <mntpt>]
                         [--secret <secret[:key_name:alias_name]>]
                         [--shell [CMD]] [--start-deadline <t>]
                         [--topology-constraint <specifier>]
                         [--total-runtime <t>] [--use-image-entrypoint]
                         [--waitend] [--waitrun] [-c <c>] [-d <desc>]
                         [-f <file>] [-h] [-i <url>] [-in <type>] [-n <name>]
                         [-p <port>] [-w <wkspce>]
Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

-n, --name

Set a job name.

-i, --image

Set a docker image URL.

-f, --file

Submit a new job using a JSON file (other arguments override the corresponding JSON values).

-c, --commandline

Provide a command for the job to run.

--entrypoint

Overwrite the default ENTRYPOINT set by the image.

--use-image-entrypoint

Use the ENTRYPOINT defined in the image manifest.

-d, --description

Set a job description.

--datasetid

Specify a dataset ID with a container mount point, or no-mount for an object-based dataset (format: dataset-id:mountPoint or dataset-id:no-mount), to be bound to the job. This can be supplied multiple times. If no-mount is provided, the job will try to fetch the dataset through a storage-specific protocol (e.g. the S3 protocol).

-in, --instance

Instance type.

--replicas

Specifies the number of child replicas created for multi-node parallel job.

--array-type

Possible choices: HOROVOD, MPI, PARALLEL, PYTORCH

Specifies the type of multi-node job array. Choices: MPI, PARALLEL, PYTORCH, HOROVOD.

--coscheduling

Specifies whether coscheduling is allowed for the multi-node parallel job submission.

--min-availability

Minimum replicas that need to be scheduled to start a multi-node job. --min-availability is not allowed with --coscheduling.

--network

Possible choices: ETHERNET, INFINIBAND

Specify the information pertaining to network or switch. Choices: INFINIBAND, ETHERNET. Default: ETHERNET

--topology-constraint

Possible choices: any, pack

Specifies a topology constraint for the job. Only available for multi-node jobs. Choices: pack, any.

-p, --port

Set ports to open on the docker container. Ports on the host do not need to be specified. The allowed range for containerPort is [1-1728] and [1730-65535]. Multiple port arguments are allowed. ACEs that allow exposed ports support the format name:containerPort/protocol. Choices for the protocol are: TCP, UDP, HTTPS, GRPC. The name must contain only alphanumeric characters, start with a letter, and be no more than 10 characters. The HTTPS and GRPC protocols do not support a name. HTTPS is applied if the protocol is not specified.

--result

Mount point for the job result.

--preempt

Possible choices: RESTARTABLE, RESUMABLE, RUNONCE

Specify the job class for preemption and scheduling behavior. One of RESUMABLE, RESTARTABLE, or RUNONCE (default for non-shell jobs).

--total-runtime

Maximum cumulative duration (in the format [nD][nH][nM][nS]) the job is in the RUNNING state before it gets gracefully shut down by the system.

--min-timeslice

Minimum duration (in the format [nD][nH][nM][nS]) the job is expected (not guaranteed) to be in the RUNNING state once scheduled to assure forward progress.

--waitrun

The CLI will block until the job reaches a RUNNING status or the user exits with Ctrl-C.

--waitend

The CLI will block until the job reaches a terminal status or the user exits with Ctrl-C.

--start-deadline

Maximum duration (in the format [nD][nH][nM][nS]) the job will have to reach a RUNNING status before it is automatically killed. May only be used with --shell. Default: 6m

--shell

Automatically exec into the running container once the job starts, using an optionally supplied command (default: /bin/bash). If --commandline is not supplied with this option, 'sleep' is used to hold the container open; --total-runtime controls the duration of the sleep command and defaults to 8H.

-w, --workspace

Specify the workspace to be bound to the job. (format: <workspace-id|workspace-name>:<mountpoint>:<mount-mode>). <mount-mode> can take values RW (read-write), RO (read-only) (default: RW). Multiple workspace arguments are allowed.

--clone

Submit a new job by cloning an existing job (other arguments will override corresponding values).

--label

Specify labels for the job. Multiple label arguments are allowed. Labels must start with an alphabetic character or '_', and valid characters are alphanumeric and '_'. Each label must be no more than 256 characters. Reserved labels start with '_'. System labels start with '__', and only admins can assign or remove system labels. A maximum of 20 user, reserved, or system labels are allowed.

--lock-label

Lock labels for the job. Default is unlocked.

--order

Specify the scheduling order for the job. Valid range: 1 to 99. Default: 50.

--priority

Possible choices: HIGH, LOW, NORMAL

Specify the job priority. Default: NORMAL. Options: ['HIGH', 'LOW', 'NORMAL']

--secret

Specify a secret name for the job. Multiple secret arguments are allowed. Unless otherwise specified, all key-value pairs in the secret are included in the job. Optionally, the key can be overridden per key-value pair.

--env-var

A custom environment variable to add to the job, in the form of a key:value pair. A key name must be between 1 and 63 characters and contain letters, numbers, or ./-_ characters. May be used multiple times in the same command.

--experiment-flow-type

Possible choices: NONE, WANDB

Third-party experiment flow type, used to export specific environment variables that associate the project/experiment name with the job and determine the format of the experiment-tracking URL. Choices: NONE, WANDB.

--experiment-project-name

Third-party project/environment name to associate with the current job/run.

--experiment-name

Optional third-party experiment name used to group jobs/runs.
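A hypothetical multi-node submission sketch tying several of the options above together (the image URL, instance name, dataset ID, and workspace name are illustrative placeholders, not real resources):

```shell
# Submit a 2-replica PyTorch job with a mounted dataset and workspace
ngc base-command job run \
    --name "train-demo" \
    --image "nvcr.io/org/team/train:latest" \
    --instance dgxa100.80g.8.norm \
    --replicas 2 --array-type PYTORCH --coscheduling \
    --datasetid 12345:/data \
    --workspace my-workspace:/workspace:RW \
    --result /results \
    --total-runtime 4H \
    --commandline "python train.py --epochs 10"
```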

info

Get job details by job ID.

ngc base-command job info [--debug] [--format_type <fmt>] [-h]
                          <job_id[:replica_id]>
Positional Arguments
<job_id[:replica_id]>

Job ID

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

attach

Attach to the container running the provided job.

ngc base-command job attach [--debug] [--format_type <fmt>] [-h]
                            <job_id[:replica_id]>
Positional Arguments
<job_id[:replica_id]>

Job ID

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

exec

Exec to the container running the provided job.

ngc base-command job exec [--commandline] [--debug] [--detach]
                          [--format_type <fmt>] [-h]
                          <job_id[:replica_id]>
Positional Arguments
<job_id[:replica_id]>

Job ID

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--commandline

The command you want to execute with exec.

--detach

Detach from the exec command.
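A hypothetical invocation (the job and replica IDs are illustrative):

```shell
# Run nvidia-smi inside the container of replica 0 of job 123456
ngc base-command job exec --commandline "nvidia-smi" 123456:0
```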

kill

Submit a request to kill jobs by job ID.

ngc base-command job kill [--debug] [--dry-run] [--format_type <fmt>]
                          [--reason <reason>] [--status <s>] [-h]
                          <jobid|jobrange|joblist>
Positional Arguments
<jobid|jobrange|joblist>

Job ID(s). Valid Examples: '1-5', '333', '1, 2', '1,10-15'

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--status

Possible choices: CREATED, PENDING_STORAGE_CREATION, PENDING_TERMINATION, PREEMPTED, PREEMPTED_BY_ADMIN, QUEUED, REQUESTING_RESOURCE, RESOURCE_CONSUMPTION_REQUEST_IN_PROGRESS, RESOURCE_GRANTED, RUNNING, STARTING

Kill jobs that match the provided status. Multiple --status flags will OR together. Options: ['CREATED', 'PENDING_STORAGE_CREATION', 'PENDING_TERMINATION', 'PREEMPTED', 'PREEMPTED_BY_ADMIN', 'QUEUED', 'REQUESTING_RESOURCE', 'RESOURCE_CONSUMPTION_REQUEST_IN_PROGRESS', 'RESOURCE_GRANTED', 'RUNNING', 'STARTING']

--dry-run

List jobs to be killed without performing the action.

--reason

Reason to terminate the job (required for administrators).
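A hypothetical invocation using a job range with a dry run (the job IDs are illustrative):

```shell
# Preview which QUEUED or RUNNING jobs in the range would be killed
ngc base-command job kill --dry-run \
    --status QUEUED --status RUNNING 1000-1010
```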

telemetry

List telemetry data for the given job.

ngc base-command job telemetry [--debug] [--format_type <fmt>]
                               [--interval-time <t>] [--interval-unit <u>]
                               [--statistics <form>] [--type <type>] [-h]
                               <job_id[:replica_id]>
Positional Arguments
<job_id[:replica_id]>

Job ID

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--interval-unit

Possible choices: HOUR, MINUTE, SECOND

Data collection interval unit. Options: ['HOUR', 'MINUTE', 'SECOND']. Default: MINUTE

--interval-time

Data collection interval time value. Default: 1

--type

Possible choices: APPLICATION_TELEMETRY, CPU_UTILIZATION, GPU_FB_USED, GPU_FI_PROF_DRAM_ACTIVE, GPU_FI_PROF_PCIE_RX_BYTES, GPU_FI_PROF_PCIE_TX_BYTES, GPU_FI_PROF_PIPE_TENSOR_ACTIVE, GPU_NVLINK_BANDWIDTH_TOTAL, GPU_NVLINK_RX_BYTES, GPU_NVLINK_TX_BYTES, GPU_POWER_USAGE, GPU_UTILIZATION, MEM_UTILIZATION, NETWORK_LOCAL_STORAGE_RAID_SIZE_BYTES, NETWORK_LOCAL_STORAGE_RAID_TOTAL_READ_BYTES, NETWORK_LOCAL_STORAGE_RAID_TOTAL_WRITE_BYTES, NETWORK_LOCAL_STORAGE_RAID_USAGE_BYTES, NETWORK_LOCAL_STORAGE_ROOT_SIZE_BYTES, NETWORK_LOCAL_STORAGE_ROOT_TOTAL_READ_BYTES, NETWORK_LOCAL_STORAGE_ROOT_TOTAL_WRITE_BYTES, NETWORK_LOCAL_STORAGE_ROOT_USAGE_BYTES, NETWORK_RDMA_PORT_RATE_RX_BYTES, NETWORK_RDMA_PORT_RATE_TX_BYTES, NETWORK_RDMA_PORT_RX_BYTES, NETWORK_RDMA_PORT_TX_BYTES, NETWORK_RX_BYTES_TOTAL, NETWORK_STORAGE_TOTAL_READ_BYTES, NETWORK_STORAGE_TOTAL_WRITE_BYTES, NETWORK_STORAGE_TRANSPORT_RX_BYTES, NETWORK_STORAGE_TRANSPORT_TX_BYTES, NETWORK_TX_BYTES_TOTAL

A telemetry type to report. Options: ['APPLICATION_TELEMETRY', 'CPU_UTILIZATION', 'GPU_FB_USED', 'GPU_FI_PROF_DRAM_ACTIVE', 'GPU_FI_PROF_PCIE_RX_BYTES', 'GPU_FI_PROF_PCIE_TX_BYTES', 'GPU_FI_PROF_PIPE_TENSOR_ACTIVE', 'GPU_NVLINK_BANDWIDTH_TOTAL', 'GPU_NVLINK_RX_BYTES', 'GPU_NVLINK_TX_BYTES', 'GPU_POWER_USAGE', 'GPU_UTILIZATION', 'MEM_UTILIZATION', 'NETWORK_LOCAL_STORAGE_RAID_SIZE_BYTES', 'NETWORK_LOCAL_STORAGE_RAID_TOTAL_READ_BYTES', 'NETWORK_LOCAL_STORAGE_RAID_TOTAL_WRITE_BYTES', 'NETWORK_LOCAL_STORAGE_RAID_USAGE_BYTES', 'NETWORK_LOCAL_STORAGE_ROOT_SIZE_BYTES', 'NETWORK_LOCAL_STORAGE_ROOT_TOTAL_READ_BYTES', 'NETWORK_LOCAL_STORAGE_ROOT_TOTAL_WRITE_BYTES', 'NETWORK_LOCAL_STORAGE_ROOT_USAGE_BYTES', 'NETWORK_RDMA_PORT_RATE_RX_BYTES', 'NETWORK_RDMA_PORT_RATE_TX_BYTES', 'NETWORK_RDMA_PORT_RX_BYTES', 'NETWORK_RDMA_PORT_TX_BYTES', 'NETWORK_RX_BYTES_TOTAL', 'NETWORK_STORAGE_TOTAL_READ_BYTES', 'NETWORK_STORAGE_TOTAL_WRITE_BYTES', 'NETWORK_STORAGE_TRANSPORT_RX_BYTES', 'NETWORK_STORAGE_TRANSPORT_TX_BYTES', 'NETWORK_TX_BYTES_TOTAL']. Default: None

--statistics

Possible choices: MAX, MEAN, MIN

Statistical form of the data to report. Options: ['MAX', 'MEAN', 'MIN']. Default: MEAN
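A hypothetical invocation (the job ID is illustrative):

```shell
# Report maximum GPU utilization for job 123456, sampled every 5 minutes
ngc base-command job telemetry \
    --type GPU_UTILIZATION --statistics MAX \
    --interval-unit MINUTE --interval-time 5 \
    123456
```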

get-json

Generate a JSON file describing the given job. The file can be used to resubmit that job.

ngc base-command job get-json [--debug] [--format_type <fmt>] [-h] <job id>
Positional Arguments
<job id>

Job ID

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

update

Update a job's labels.

ngc base-command job update [--clear-label] [--debug] [--format_type <fmt>]
                            [--label <label>] [--lock-label]
                            [--remove-label REMOVE_LABEL] [--unlock-label]
                            [-h]
                            <job id>
Positional Arguments
<job id>

Job ID

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--label

Specify labels for the job. Multiple label arguments are allowed. Labels must start with an alphabetic character or '_', and valid characters are alphanumeric and '_'. Each label must be no more than 256 characters. Reserved labels start with '_'. System labels start with '__', and only admins can assign or remove system labels. A maximum of 20 user, reserved, or system labels are allowed.

--remove-label

Remove a label. Multiple remove-label arguments are allowed and support standard Unix shell-style wildcards such as '*' and '?'.

--clear-label

Remove all labels for the job.

--lock-label

Lock Labels.

--unlock-label

Unlock Labels.
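A hypothetical invocation (the job ID and labels are illustrative):

```shell
# Replace an old label set with a new label and lock labels on job 123456
ngc base-command job update \
    --remove-label 'exp_old*' --label exp_v2 --lock-label 123456
```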

log

Print a job's log.

ngc base-command job log [--debug] [--format_type <fmt>] [--head]
                         [--lines <lines>] [--tail] [-h]
                         <job_id[:replica_id]>
Positional Arguments
<job_id[:replica_id]>

Job ID

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--head

Print the first part of the log file. Default is 10 lines.

--lines

Specify the number of lines to print. Must be used with --head or --tail.

--tail

Print the last part of the log file. Default is 10 lines.
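A hypothetical invocation (the job and replica IDs are illustrative):

```shell
# Print the last 50 log lines of replica 1 of job 123456
ngc base-command job log --tail --lines 50 123456:1
```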

dataset

Data Commands

ngc base-command dataset [--debug] [--format_type <fmt>] [-h]  ...

Named Arguments

--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

dataset

Possible choices: convert, download, export, import, info, list, remove, revoke-share, share, upload

Sub-commands

export

Dataset Export Commands

ngc base-command dataset export [--debug] [--format_type <fmt>] [-h]  ...
Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

export

Possible choices: info, list, run

Sub-commands
list

List all dataset export jobs.

ngc base-command dataset export list [--column <column>] [--debug]
                                     [--format_type <fmt>] [-h]
Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--column

Specify output column as column[=header], header is optional, default is id[=Id]. Valid columns are id[=Id], source[=Source], destination[=Destination], status[=Status], start_time[="Start time"], end_time[="End time"]. Use quotes with spaces. Multiple column arguments are allowed.

run

Export a dataset from the ACE into an object store.

ngc base-command dataset export run [--account-name <account-name>]
                                    [--bucket <bucket>]
                                    [--container <container>] [--debug]
                                    [--endpoint <endpoint>]
                                    [--format_type <fmt>]
                                    [--instance <instance>]
                                    [--prefix <prefix>] --protocol <protocol>
                                    [--region <region>] --secret <secret>
                                    [--service-url <service-url>] [-h]
                                    <dataset>
Positional Arguments
<dataset>

Dataset ID to be exported.

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--endpoint

S3 endpoint. Only applies when --protocol is s3.

--bucket

S3 bucket name. Only applies when --protocol is s3.

--region

S3 region (optional). Default: us-east-1. Only applies when --protocol is s3.

--account-name

Azure Blob account name. Only applies when --protocol is azureblob.

--container

Azure Blob container name. Only applies when --protocol is azureblob.

--service-url

Azure Blob service URL (optional). Only applies when --protocol is azureblob.

--prefix

Object prefix. Enables copying a subset of all objects in a location.

--instance

Instance to use for the data export.

Required named arguments
--protocol

Possible choices: azureblob, s3, url

Access protocol for the destination. Options: s3, url, azureblob.

--secret

NGC Secret object to use.
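A hypothetical S3 export sketch (the dataset ID, secret name, endpoint, bucket, and prefix are illustrative placeholders):

```shell
# Export dataset 12345 to an S3 bucket under the given prefix
ngc base-command dataset export run \
    --protocol s3 --secret my-s3-secret \
    --endpoint https://s3.us-east-1.amazonaws.com \
    --bucket my-bucket --prefix exports/ds-12345 \
    12345
```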

info

Status of the dataset export job.

ngc base-command dataset export info [--debug] [--format_type <fmt>] [-h]
                                     <job_id>
Positional Arguments
<job_id>

Job ID.

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

import

Dataset Import Commands

ngc base-command dataset import [--debug] [--format_type <fmt>] [-h]  ...
Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

import

Possible choices: finish, info, list, start

Sub-commands
list

List all dataset import jobs.

ngc base-command dataset import list [--column <column>] [--debug]
                                     [--format_type <fmt>] [-h]
Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--column

Specify output column as column[=header], header is optional, default is id[=Id]. Valid columns are id[=Id], source[=Source], destination[=Destination], status[=Status], start_time[="Start time"], end_time[="End time"]. Use quotes with spaces. Multiple column arguments are allowed.

start

Start import of a dataset from an object store into the ACE.

ngc base-command dataset import start [--account-name <account-name>]
                                      [--bucket <bucket>]
                                      [--container <container>] [--debug]
                                      [--endpoint <endpoint>]
                                      [--format_type <fmt>] --instance
                                      <instance> [--prefix <prefix>]
                                      --protocol <protocol>
                                      [--region <region>] --secret <secret>
                                      [--service-url <service-url>] [-h]
Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--endpoint

S3 endpoint. Only applies when --protocol is s3.

--bucket

S3 bucket name. Only applies when --protocol is s3.

--prefix

Object prefix. Enables copying a subset of all objects in a location.

--region

S3 region (optional). Default: us-east-1. Only applies when --protocol is s3.

--account-name

Azure Blob account name. Only applies when --protocol is azureblob.

--container

Azure Blob container name. Only applies when --protocol is azureblob.

--service-url

Azure Blob service URL (optional). Only applies when --protocol is azureblob.

Required named arguments
--protocol

Possible choices: azureblob, s3, url

Access protocol for the source. Options: s3, url, azureblob.

--secret

NGC Secret object to use.

--instance

Instance to use for the data import.
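A hypothetical S3 import sketch (the secret name, endpoint, bucket, prefix, and instance are illustrative placeholders):

```shell
# Begin importing objects from an S3 bucket into the configured ACE
ngc base-command dataset import start \
    --protocol s3 --secret my-s3-secret \
    --endpoint https://s3.us-east-1.amazonaws.com \
    --bucket my-bucket --prefix raw/images \
    --instance dgxa100.80g.1.norm
```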

info

Status of the import job.

ngc base-command dataset import info [--debug] [--format_type <fmt>] [-h]
                                     <job_id>
Positional Arguments
<job_id>

Job ID.

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

finish

Finish a dataset import.

ngc base-command dataset import finish [--debug] [--desc <desc>]
                                       [--format_type <fmt>]
                                       [--from-dataset <id>] [--name <name>]
                                       [-h]
                                       <job_id>
Positional Arguments
<job_id>

Job ID.

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--from-dataset

Dataset to import metadata from. Used during ACE to ACE copies, typically after having run a dataset export. Imported metadata: name, description, sharing information.

--name

Dataset Name. If used together with --from-dataset, --name takes precedence.

--desc

Dataset Description. If used together with --from-dataset, --desc takes precedence.
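A hypothetical invocation (the job ID, name, and description are illustrative):

```shell
# Turn completed import job 987 into a named dataset
ngc base-command dataset import finish \
    --name "imagenet-subset" --desc "Subset imported from S3" 987
```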

list

List all accessible datasets. The configured ACE and team are used to filter the list.

ngc base-command dataset list [--all] [--column <column>] [--debug]
                              [--format_type <fmt>] [--name <name>] [--owned]
                              [--prepopulated] [--status <status>] [-h]
Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--owned

Include only owned datasets.

--prepopulated

Include only pre-populated datasets.

--name

Include only datasets with name <name>; the wildcards '*' and '?' are allowed.

--status

Possible choices: COMPLETED, CONVERTED, CREATED, DELETED, INITIALIZING

Include only datasets with status <status>. Options: ['COMPLETED', 'CONVERTED', 'CREATED', 'DELETED', 'INITIALIZING'].

--all

(For administrators only) Show all datasets across all users.

--column

Specify output column as column[=header], header is optional, default is uid[=Id]. Valid columns are id[="Integer Id"], name[=Name], org[=Org], team[=Team], modified[="Modified Date"], created[="Created Date"], creator[="Creator UserName"], description[=Description], shared[=Shared], owned[=Owned], ace[=Ace], status[=Status], size[=Size], prepop[=Pre-pop]. Use quotes with spaces. Multiple column arguments are allowed.
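As an illustration, the filters and columns above can be combined in one call. The flag values come from this reference; the combination itself is only a sketch:

```shell
# List only owned, completed datasets, showing name and size, as JSON.
ngc base-command dataset list --owned --status COMPLETED \
    --column name --column size --format_type json
```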

info

Retrieve details of a dataset given its dataset ID.

ngc base-command dataset info [--debug] [--files] [--format_type <fmt>] [-h]
                              <dataset id>
Positional Arguments
<dataset id>

Dataset ID

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--files

List files in addition to details for a dataset. Default: False.

share

Share a dataset with an org or team. If a team is set, the dataset will be shared with that team.

ngc base-command dataset share [--debug] [--format_type <fmt>] [-h] [-y]
                               <dataset id>
Positional Arguments
<dataset id>

Dataset ID

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

-y, --yes

Automatically say yes to all interactive questions.

revoke-share

Revoke dataset sharing with an org or team.

ngc base-command dataset revoke-share [--debug] [--format_type <fmt>] [-h]
                                      [-y]
                                      <dataset id>
Positional Arguments
<dataset id>

Dataset ID

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

-y, --yes

Automatically say yes to all interactive questions.

upload

Upload a dataset to a given ACE. The dataset will be local to the currently set ACE.

ngc base-command dataset upload [--debug] [--desc <desc>] [--dry-run]
                                [--format_type <fmt>] [--omit-links]
                                [--share [<team>]] [--source <path>]
                                [--threads <t>] [-h] [-y]
                                <dataset>
Positional Arguments
<dataset>

Dataset Name or ID

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--desc

Dataset Description

--source

Path to the file(s) to be uploaded. Default: .

--threads

Number of threads to be used while uploading the dataset (default: 12, max: 21)

-y, --yes

Automatically say yes to all interactive questions.

--omit-links

Do not follow symbolic links.

--dry-run

List file paths, total upload size and file count without performing the upload.

--share

Share the dataset with a team after upload. Can be used multiple times. If no team is specified, the currently set team will be used.
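A typical upload flow might preview first and then upload; the dataset name, source path, and team name below are placeholders:

```shell
# Preview what would be uploaded, then upload and share with a team.
ngc base-command dataset upload --source ./train-data --dry-run my-dataset
ngc base-command dataset upload --source ./train-data \
    --desc "Training images" --share my-team -y my-dataset
```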

download

Download datasets by ID.

ngc base-command dataset download [--debug] [--dest <path>] [--dir <wildcard>]
                                  [--dry-run] [--exclude <wildcard>]
                                  [--file <wildcard>] [--format_type <fmt>]
                                  [--resume <resume>] [--zip] [-h]
                                  <dataset id>
Positional Arguments
<dataset id>

Dataset ID

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--dest

Specify the path to store the downloaded files. Default: .

--file

Specify individual files to download from the dataset. Supports standard Unix shell-style wildcards (e.g. ?, [abc], [!a-z]). May be used multiple times in the same command.

--resume

Resume the download for the dataset. Specify the file name saved by the download. Files will be downloaded to the directory containing that file.

--dir

Specify directories to download from the dataset. Supports standard Unix shell-style wildcards (e.g. ?, [abc], [!a-z]). May be used multiple times in the same command.

--zip

Download the entire dataset directory as a zip file.

--exclude

Exclude files or directories from the downloaded dataset. Supports standard Unix shell-style wildcards (e.g. ?, [abc], [!a-z]). May be used multiple times in the same command.

--dry-run

List total size of the download without performing the download.
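For example, a partial download might combine the flags above; the dataset ID and patterns are placeholders:

```shell
# Download only CSV files from dataset 12345 into ./data, skipping tmp/.
ngc base-command dataset download --file "*.csv" --exclude "tmp/*" \
    --dest ./data 12345
```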

remove

Remove a dataset.

ngc base-command dataset remove [--debug] [--format_type <fmt>] [-h] [-y]
                                <datasetid|datasetrange|datasetlist>
Positional Arguments
<datasetid|datasetrange|datasetlist>

Dataset ID(s). Valid Examples: '1-5', '333', '1,2', '1,10-15'. Do not include any spaces between IDs. Dataset range is not supported while using Data Platform API.

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

-y, --yes

Automatically say yes to all interactive questions.
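The positional argument accepts single IDs, ranges, and comma-separated lists, as shown above; the IDs below are placeholders:

```shell
# Remove a single dataset, then a mixed list and range, without prompting.
ngc base-command dataset remove -y 333
ngc base-command dataset remove -y 1,10-15
```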

convert

Convert data from a variety of sources to a dataset in the currently set ACE.

ngc base-command dataset convert [--debug] [--desc <desc>]
                                 [--format_type <fmt>] --from-result <id> [-h]
                                 <name>
Positional Arguments
<name>

Dataset Name

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--desc

Provide a description for the dataset.

Required named arguments
--from-result

Job result to convert to a dataset. Must be in the same ACE as target. Result files are no longer available after conversion.
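A sketch of converting a job result (the result ID and dataset name are placeholders; note that the result files are removed by the conversion):

```shell
# Turn the result of job 7654321 into a dataset named run-output.
ngc base-command dataset convert --from-result 7654321 \
    --desc "Output of training run" run-output
```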

workspace

Workspace Commands

ngc base-command workspace [--debug] [--format_type <fmt>] [-h]  ...

Named Arguments

--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

workspace

Possible choices: create, download, export, import, info, list, mount, remove, revoke-share, set, share, unmount, upload

Sub-commands

export

Workspace Export Commands

ngc base-command workspace export [--debug] [--format_type <fmt>] [-h]  ...
Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

export

Possible choices: info, list, run

Sub-commands
list

List all workspace export jobs.

ngc base-command workspace export list [--column <column>] [--debug]
                                       [--format_type <fmt>] [-h]
Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--column

Specify output column as column[=header], header is optional, default is id[=Id]. Valid columns are id[=Id], source[=Source], destination[=Destination], status[=Status], start_time[="Start time"], end_time[="End time"]. Use quotes with spaces. Multiple column arguments are allowed.

run

Export a workspace from the ACE into an object store.

ngc base-command workspace export run [--account-name <account-name>]
                                      [--bucket <bucket>]
                                      [--container <container>] [--debug]
                                      [--endpoint <endpoint>]
                                      [--format_type <fmt>]
                                      [--instance <instance>]
                                      [--prefix <prefix>] --protocol
                                      <protocol> [--region <region>] --secret
                                      <secret> [--service-url <service-url>]
                                      [-h]
                                      <workspace>
Positional Arguments
<workspace>

Workspace ID to be exported.

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--endpoint

S3 endpoint. Only applies when --protocol is s3.

--bucket

S3 bucket name. Only applies when --protocol is s3.

--prefix

Object prefix. Enables copying a subset of all objects in a location.

--region

S3 region (optional). Default: us-east-1. Only applies when --protocol is s3.

--account-name

Azure Blob account name. Only applies when --protocol is azureblob.

--container

Azure Blob container name. Only applies when --protocol is azureblob.

--service-url

Azure Blob service url (optional). Only applies when --protocol is azureblob.

--instance

Instance to use for the data export.

Required named arguments
--protocol

Possible choices: azureblob, s3, url

Access protocol for the destination. Options: s3, url, azureblob.

--secret

NGC Secret object to use.
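Putting the required and S3-specific options together, an export to S3 might look like this; the workspace ID, endpoint, bucket, and secret name are placeholders:

```shell
# Export workspace ws-abc123 to an S3 bucket under a prefix.
ngc base-command workspace export run --protocol s3 \
    --endpoint https://s3.us-east-1.amazonaws.com --bucket my-bucket \
    --prefix exports/ws-abc123 --secret my-s3-secret ws-abc123
```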

info

Status of the workspace export job.

ngc base-command workspace export info [--debug] [--format_type <fmt>] [-h]
                                       <job_id>
Positional Arguments
<job_id>

Job ID.

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

import

Workspace Import Commands

ngc base-command workspace import [--debug] [--format_type <fmt>] [-h]  ...
Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

import

Possible choices: info, list, run

Sub-commands
list

List all workspace import jobs.

ngc base-command workspace import list [--column <column>] [--debug]
                                       [--format_type <fmt>] [-h]
Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--column

Specify output column as column[=header], header is optional, default is id[=Id]. Valid columns are id[=Id], source[=Source], destination[=Destination], status[=Status], start_time[="Start time"], end_time[="End time"]. Use quotes with spaces. Multiple column arguments are allowed.

run

Import a workspace from an object store into the ACE.

ngc base-command workspace import run [--account-name <account-name>]
                                      [--bucket <bucket>]
                                      [--container <container>] [--debug]
                                      [--desc <desc>] [--endpoint <endpoint>]
                                      [--format_type <fmt>]
                                      [--from-workspace <id>] --instance
                                      <instance> [--name <name>]
                                      [--prefix <prefix>] --protocol
                                      <protocol> [--region <region>] --secret
                                      <secret> [--service-url <service-url>]
                                      [-h]
Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--endpoint

S3 endpoint. Only applies when --protocol is s3.

--bucket

S3 bucket name. Only applies when --protocol is s3.

--prefix

Object prefix. Enables copying a subset of all objects in a location.

--region

S3 region (optional). Default: us-east-1. Only applies when --protocol is s3.

--account-name

Azure Blob account name. Only applies when --protocol is azureblob.

--container

Azure Blob container name. Only applies when --protocol is azureblob.

--service-url

Azure Blob service url (optional). Only applies when --protocol is azureblob.

--from-workspace

Workspace to import metadata from. Used during ACE to ACE copies, typically after having run a workspace export. Imported metadata: name, description, sharing information.

--name

Workspace Name. If used together with --from-workspace, --name takes precedence.

--desc

Workspace Description. If used together with --from-workspace, --desc takes precedence.

Required named arguments
--protocol

Possible choices: azureblob, s3, url

Access protocol for the source. Options: s3, url, azureblob.

--secret

NGC Secret object to use.

--instance

Instance to use for the data import.
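A matching import from the same object-store location might look like this; the endpoint, bucket, secret, instance type, and workspace name are placeholders:

```shell
# Import the exported workspace into the currently set ACE.
ngc base-command workspace import run --protocol s3 \
    --endpoint https://s3.us-east-1.amazonaws.com --bucket my-bucket \
    --prefix exports/ws-abc123 --secret my-s3-secret \
    --instance dgxa100.80g.1.norm --name restored-workspace
```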

info

Status of the import job.

ngc base-command workspace import info [--debug] [--format_type <fmt>] [-h]
                                       <job_id>
Positional Arguments
<job_id>

Job ID.

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

create

Create a new workspace.

ngc base-command workspace create [--debug] [--format_type <fmt>]
                                  [--name <name>] [-h]
Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--name

Set the workspace name. This may only be done once.

remove

Remove a workspace.

ngc base-command workspace remove [--debug] [--format_type <fmt>] [-h] [-y]
                                  <workspace>
Positional Arguments
<workspace>

Workspace Name or ID

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

-y, --yes

Automatically say yes to all interactive questions.

mount

Mount an existing workspace.

ngc base-command workspace mount [--control-path] [--debug] [--force]
                                 [--format_type <fmt>] --mode <mode>
                                 [--remote-path <path>] [-h] [-y]
                                 <workspace> <local path>
Positional Arguments
<workspace>

Workspace Name or ID

<local path>

Local Mount Point Name

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--remote-path

Path on the remote server inside the workspace.

--force

Force mount; this will remount if there is a broken mount point.

-y, --yes

Automatically say yes to all interactive questions.

--control-path

Set control path to none and enable control master.

Required named arguments
--mode

Possible choices: RO, RW

Mount mode. Valid values: RW, RO
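For example (the workspace name and local mount point are placeholders):

```shell
# Mount a workspace read-write at ./ws, then unmount it when done.
ngc base-command workspace mount --mode RW my-workspace ./ws
ngc base-command workspace unmount ./ws
```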

unmount

Unmount a workspace.

ngc base-command workspace unmount [--debug] [--format_type <fmt>] [-h]
                                   <local path>
Positional Arguments
<local path>

Local Mount Point Name

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

info

Get workspace details.

ngc base-command workspace info [--debug] [--format_type <fmt>] [--show-sftp]
                                [-h]
                                <workspace>
Positional Arguments
<workspace>

Workspace Name or ID

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--show-sftp

Show hostname, port and token for sftp.

set

Set name and/or description for a workspace.

ngc base-command workspace set [--debug] [--desc <desc>] [--format_type <fmt>]
                               [-h] [-n <name>] [-y]
                               <workspaceid>
Positional Arguments
<workspaceid>

Workspace ID

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

-n, --name

Set a workspace name. This may be done once.

--desc

Set a workspace description. This may be done once.

-y, --yes

Automatically say yes to all interactive questions.

list

List all accessible workspaces. The currently set ACE and team are used to filter the output.

ngc base-command workspace list [--all] [--column <column>] [--debug]
                                [--format_type <fmt>] [--name <name>]
                                [--owned] [-h]
Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--owned

Include only owned workspaces.

--name

Include only workspaces with name <name>; the wildcards '*' and '?' are allowed.

--all

(For administrators only) Show all workspaces across all users.

--column

Specify output column as column[=header], header is optional, default is id[=Id]. Valid columns are name[=Name], org[=Org], team[=Team], updated[="Updated Date"], created[="Created Date"], creator[="Creator UserName"], description[=Description], shared[=Shared], owned[=Owned], ace[=Ace], size[=Size]. Use quotes with spaces. Multiple column arguments are allowed.

share

Share a workspace with an org or team.

ngc base-command workspace share [--debug] [--format_type <fmt>] [-h] [-y]
                                 <workspace>
Positional Arguments
<workspace>

Workspace Name or ID

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

-y, --yes

Automatically say yes to all interactive questions.

revoke-share

Revoke workspace sharing with an org or team.

ngc base-command workspace revoke-share [--debug] [--format_type <fmt>] [-h]
                                        [-y]
                                        <workspace>
Positional Arguments
<workspace>

Workspace Name or ID

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

-y, --yes

Automatically say yes to all interactive questions.

upload

Upload files to a workspace.

ngc base-command workspace upload [--debug] [--destination <path>] [--dry-run]
                                  [--exclude <wildcard>] [--format_type <fmt>]
                                  [--source <path>] [--threads <t>] [-h]
                                  <workspace>
Positional Arguments
<workspace>

Workspace Name or ID

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--threads

Number of threads to be used while uploading the workspace (default: 12, max: 21)

--source

Provide the path to the file(s) to be uploaded. Default: .

--destination

Provide a target directory within the workspace for the upload. Default: /

--exclude

Exclude files or directories from the source path. Supports standard Unix shell-style wildcards (e.g. ?, [abc], [!a-z]). May be used multiple times in the same command.

--dry-run

List file paths, total upload size and file count without performing the upload.
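A cautious upload might preview with --dry-run first; the workspace name and paths are placeholders:

```shell
# Preview, then upload ./src into /code inside the workspace.
ngc base-command workspace upload --source ./src --destination /code \
    --exclude "*~" --dry-run my-workspace
ngc base-command workspace upload --source ./src --destination /code my-workspace
```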

download

Download a workspace.

ngc base-command workspace download [--debug] [--dest <path>] [--dir <path>]
                                    [--dry-run] [--file <path>]
                                    [--format_type <fmt>] [--zip] [-h]
                                    <workspace>
Positional Arguments
<workspace>

Workspace Name or ID

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--dest

Specify the path to store the downloaded workspace. Default: .

--file

Specify file(s) to download. This flag can be used multiple times. If omitted, the entire workspace directory will be downloaded.

--dir

Specify one or more directories to download. If omitted, the entire workspace directory will be downloaded.

--zip

Download the entire workspace directory as a zip file.

--dry-run

List total size of the download without performing the download.

datamover (dm)

Data Mover commands to assist in copying resources to/from object storage or to another ACE.

ngc base-command datamover [--debug] [--format_type <fmt>] [-h]  ...

Named Arguments

--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

datamover

Possible choices: enqueue, list, update

Sub-commands

enqueue

Add copy job(s) to the Data Mover queue.

ngc base-command datamover enqueue [--account-name <account-name>]
                                   [--all-from-org] [--all-from-team]
                                   [--bucket <bucket>]
                                   [--container <container>] [--debug]
                                   [--destination-ace <destination-ace>]
                                   [--destination-instance <destination-instance>]
                                   [--endpoint <endpoint>]
                                   [--format_type <fmt>] [--id <id>]
                                   [--manifest <manifest>]
                                   [--origin-ace <origin-ace>]
                                   [--origin-instance <origin-instance>]
                                   [--prefix <prefix>] --protocol <protocol>
                                   [--region <region>] --resource-type
                                   <resource_type> --secret <secret>
                                   [--service-url <service-url>] [-h] [-y]
Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--endpoint

S3 endpoint. Only applies when --protocol is s3.

--bucket

S3 bucket name. Only applies when --protocol is s3.

--region

S3 region (optional). Default: us-east-1. Only applies when --protocol is s3.

--account-name

Azure Blob account name. Only applies when --protocol is azureblob.

--container

Azure Blob container name. Only applies when --protocol is azureblob.

--service-url

Azure Blob service url (optional). Only applies when --protocol is azureblob.

--prefix

Object prefix. Enables copying a subset of all objects in a location.

--origin-instance

Instance to use for the data export.

--destination-instance

Instance to use for the data import.

--origin-ace

Origin ACE name.

--destination-ace

Destination ACE name.

--manifest

Copy all resources from the provided manifest file.

--id

ID of the single resource to be copied.

--all-from-org

Copy all resources from the organization.

--all-from-team

Copy all resources from the team.

-y, --yes

Automatically say yes to all interactive questions.

Required named arguments
--protocol

Possible choices: azureblob, s3, url

Access protocol for the destination. Options: s3, url, azureblob.

--secret

NGC Secret object to use.

--resource-type

Possible choices: dataset, workspace

Type of resource to be copied. Options: dataset, workspace
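Combining the required arguments, enqueueing a single dataset copy to S3 might look like this; the dataset ID, bucket, secret, and instance names are placeholders:

```shell
# Queue a copy of dataset 12345 to an S3 bucket.
ngc base-command datamover enqueue --resource-type dataset --id 12345 \
    --protocol s3 --bucket my-bucket --region us-east-1 \
    --secret my-s3-secret --origin-instance dgxa100.80g.1.norm -y
```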

update

Move available jobs to the next data movement stage.

ngc base-command datamover update [--debug] [--format_type <fmt>]
                                  [--interval <seconds>] [--loop] [-h]
Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--loop

Run the update command in an endless loop.

--interval

Interval in seconds for the endless loop. Default: 10.
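For example, to keep advancing queued jobs with a longer polling interval:

```shell
# Run the updater continuously, checking every 30 seconds.
ngc base-command datamover update --loop --interval 30
```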

list

List status of Data Mover jobs.

ngc base-command datamover list [--debug] [--format_type <fmt>] [-h]
Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

quickstart (qs)

QuickStart Commands

ngc base-command quickstart [--debug] [--format_type <fmt>] [-h]
                            {cluster,project} ...

Named Arguments

--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

quickstart

{cluster,project}

Possible choices: cluster, project

Sub-commands

cluster

QuickStart Cluster Commands

ngc base-command quickstart cluster [--debug] [--format_type <fmt>] [-h]
                                    {create,info,list,list-instance-types,remove,start,status,stop,update}
                                    ...
Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

cluster
{create,info,list,list-instance-types,remove,start,status,stop,update}

Possible choices: create, info, list, list-instance-types, remove, start, status, stop, update

Sub-commands
list-instance-types

Show a list of all available instance types.

ngc base-command quickstart cluster list-instance-types --cluster-type
                                                        {dask,jupyterlab}
                                                        [--debug]
                                                        [--format_type <fmt>]
                                                        [--multinode] [-h]
Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--multinode

Show multinode instance types. Default: False.

Required named arguments
--cluster-type

Possible choices: dask, jupyterlab

The type of cluster: choose from ['dask', 'jupyterlab'].

list

List clusters.

ngc base-command quickstart cluster list --cluster-type {dask,jupyterlab}
                                         [--column <column>] [--debug]
                                         [--format_type <fmt>] [--org-only]
                                         [--owned] [-h]
Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--column

Specify output column as column[=header], header is optional, default is name[=Name]. Valid columns are additionalInfo[="Additional Info"], id[=ID], name[=Name], org[=Org], status[=Status], team[=Team], type[=Type]. Use quotes with spaces. Multiple column arguments are allowed.

--org-only

Don't return clusters created at the team level.

--owned

Only return clusters owned by the current user (admin only).

Required named arguments
--cluster-type

Possible choices: dask, jupyterlab

The type of cluster: choose from ['dask', 'jupyterlab'].

info

Show information about a cluster.

ngc base-command quickstart cluster info [--debug] [--format_type <fmt>] [-h]
                                         cluster_id
Positional Arguments
cluster_id

The ID of the cluster

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

status

Show the status of a cluster.

ngc base-command quickstart cluster status [--debug] [--format_type <fmt>]
                                           [-h]
                                           cluster_id
Positional Arguments
cluster_id

The ID of the cluster

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

remove

Shut down and delete a cluster.

ngc base-command quickstart cluster remove [--debug] [--format_type <fmt>]
                                           [-h]
                                           cluster_id
Positional Arguments
cluster_id

The ID of the cluster

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

stop

Stop a cluster.

ngc base-command quickstart cluster stop [--debug] [--format_type <fmt>] [-h]
                                         cluster_id
Positional Arguments
cluster_id

The ID of the cluster

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

start

Start a cluster

ngc base-command quickstart cluster start [--debug] [--format_type <fmt>] [-h]
                                          cluster_id
Positional Arguments
cluster_id

The ID of the cluster

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii
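The stop, start, and remove subcommands above share the same shape; a hypothetical lifecycle sequence (cluster ID is a placeholder) might look like:

```shell
# Pause a running cluster, resume it later, then shut it down and delete it
ngc base-command quickstart cluster stop 1234
ngc base-command quickstart cluster start 1234
ngc base-command quickstart cluster remove 1234
```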

create

Create a new cluster

ngc base-command quickstart cluster create
                                           [--additional-open-ports ADDITIONAL_OPEN_PORTS]
                                           [--additional-port-mappings ADDITIONAL_PORT_MAPPINGS]
                                           --cluster-lifetime CLUSTER_LIFETIME
                                           --cluster-type {dask,jupyterlab}
                                           [--conda-packages CONDA_PACKAGES]
                                           --container-image CONTAINER_IMAGE
                                           --data-output-mount-point
                                           DATA_OUTPUT_MOUNT_POINT
                                           [--dataset-mount DATASET_MOUNT]
                                           [--debug]
                                           [--expiry-duration EXPIRY_DURATION]
                                           [--format_type <fmt>]
                                           [--job-order JOB_ORDER]
                                           [--job-priority JOB_PRIORITY]
                                           [--label LABEL] [--labels-locked]
                                           [--min-availability MIN_AVAILABILITY]
                                           [--min-time-slice MIN_TIME_SLICE]
                                           [--multi-node] [--name NAME]
                                           --nworkers NWORKERS
                                           [--options <key:value>]
                                           [--pip-packages PIP_PACKAGES]
                                           [--preempt-class PREEMPT_CLASS]
                                           [--scheduler-dashboard-address SCHEDULER_DASHBOARD_ADDRESS]
                                           [--scheduler-env-var SCHEDULER_ENV_VAR]
                                           --scheduler-instance-type
                                           SCHEDULER_INSTANCE_TYPE
                                           [--scheduler-port SCHEDULER_PORT]
                                           [--scheduler-reserved-gpus SCHEDULER_RESERVED_GPUS]
                                           [--scheduler-startup-script SCHEDULER_STARTUP_SCRIPT]
                                           [--system-packages SYSTEM_PACKAGES]
                                           [--topology-constraint {any,pack}]
                                           [--user-secret <secret[:key_name:alias_name]>]
                                           [--worker-dashboard-address WORKER_DASHBOARD_ADDRESS]
                                           [--worker-env-var WORKER_ENV_VAR]
                                           [--worker-instance-type WORKER_INSTANCE_TYPE]
                                           [--worker-reserved-gpus WORKER_RESERVED_GPUS]
                                           [--worker-startup-script WORKER_STARTUP_SCRIPT]
                                           [--workspace-mount WORKSPACE_MOUNT]
                                           [-h]
Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--name

The name of the cluster

--scheduler-dashboard-address

The dashboard address for the scheduler. Only used for 'dask' cluster type.

--scheduler-port

The port to use for the scheduler. Only used for 'dask' cluster type.

--scheduler-startup-script

The startup script for the scheduler

--scheduler-reserved-gpus

The number of GPUs reserved for the scheduler

--scheduler-env-var

An environment variable to be set in the scheduler node. It must be in the format 'var_name:value'. You may define more than one environment variable by specifying multiple '--scheduler-env-var' arguments.

--worker-dashboard-address

The dashboard address for the worker. Only used for 'dask' cluster type.

--worker-instance-type

The instance type of the worker. Required for 'dask' cluster type; ignored otherwise.

--worker-startup-script

The startup script for the worker. Only used for 'dask' cluster type.

--worker-reserved-gpus

The number of GPUs reserved for the worker. Only used for 'dask' cluster type.

--worker-env-var

An environment variable to be set in the worker node. It must be in the format 'var_name:value'. You may define more than one environment variable by specifying multiple '--worker-env-var' arguments. Only used for 'dask' cluster type.

--system-packages

List of packages to install on the scheduler and worker using the 'apt install' or 'yum install' command (apt or yum is chosen based on the Linux distribution of the container image). You may define more than one package by specifying multiple '--system-packages' arguments.

--conda-packages

List of packages to install on the scheduler and worker using 'conda install' command. You may define more than one package by specifying multiple '--conda-packages' arguments.

--pip-packages

List of packages to install on the scheduler and worker using 'pip install' command. You may define more than one package by specifying multiple '--pip-packages' arguments.

--expiry-duration

The expiry duration for the cluster. The format is <num>X, where 'X' is a single letter representing the unit of time: <d|h|m|s>, for days, hours, minutes, and seconds, respectively.

--additional-open-ports

(deprecated; use additional-port-mappings instead) Any additional ports to open for the cluster. You may include more than one additional port mapping by specifying multiple --additional-open-ports arguments.

--additional-port-mappings

Additional ports to open on the cluster. Mappings should be in the format '[name:]port[/protocol]'. If protocol is not specified, HTTPS will be used. The name portion cannot be included for HTTPS and GRPC protocols; it is required for the others. Valid protocols: (TCP|UDP|HTTPS|GRPC). You may include more than one additional port mapping by specifying multiple --additional-port-mappings arguments.

--dataset-mount

A mount point for a dataset. Enter in the format '<id>:<mount point path>'. The 'id' value must be an integer. You may include more than one dataset mount point by specifying multiple '--dataset-mount' arguments.

--workspace-mount

A mount point for a workspace. Enter in the format '<id>:<mount point path>:<rw>'. The 'rw' value should be 'true' if the mount is read/write, and 'false' if it is read-only. You may include more than one workspace mount point by specifying multiple '--workspace-mount' arguments.

--label

A user/reserved/system label that describes this job. You may define more than one label by specifying multiple '--label' arguments.

--labels-locked

Lock labels so they cannot be changed later. Default=False

--multi-node

Only used for 'jupyterlab' cluster types. Determines whether the cluster is multi-node. Default=False

--job-order

Order of the job; from 1 to 99.

--job-priority

Priority of the job; choose from 'LOW', 'NORMAL', or 'HIGH'. Default='NORMAL'

--min-availability

Minimum replicas that need to be scheduled to start a multi-node job.

--min-time-slice

Minimum duration (in the format [nD][nH][nM][nS]) the job is expected (not guaranteed) to remain in the RUNNING state once scheduled, to ensure forward progress.

--options

A custom environment variable to add to the job, in the form of a key-value pair. A key name must be between 1 and 63 characters and contain letters, numbers, or './-_'. May be used multiple times in the same command.

--preempt-class

Describes the job class for preemption and scheduling behavior. It must be one of 'RESUMABLE', 'RESTARTABLE', or 'RUNONCE' (default).

--topology-constraint

Possible choices: any, pack

Specifies a topology constraint for the job. Only available for multi-node jobs. Choices: pack, any.

--user-secret

Specify a secret name for the job. Format: '<secret[:key_name:alias_name]>'. Multiple secret arguments are allowed. Unless specific keys are given, all key-value pairs will be included in the job. Key overrides are optionally available per key-value pair.

Required named arguments
--cluster-type

Possible choices: dask, jupyterlab

The type of cluster: choose from ['dask', 'jupyterlab'].

--scheduler-instance-type

The instance type of the scheduler

--nworkers

Number of workers in the cluster

--container-image

The container image to use

--data-output-mount-point

The path to where the data output will be mounted

--cluster-lifetime

The lifetime for the cluster. The format is <num>X, where 'X' is a single letter representing the unit of time: <d|h|m|s>, for days, hours, minutes, and seconds, respectively.
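Putting the required arguments together, a hypothetical Dask cluster creation might look like the following (the image path and instance type names are placeholders; use values available in your ACE, and note that '--worker-instance-type' is required for the 'dask' cluster type):

```shell
# Create a two-worker Dask cluster that lives for at most 8 hours
ngc base-command quickstart cluster create \
    --cluster-type dask \
    --container-image "nvcr.io/myorg/myteam/myimage:latest" \
    --scheduler-instance-type dgxa100.80g.1.norm \
    --worker-instance-type dgxa100.80g.1.norm \
    --nworkers 2 \
    --data-output-mount-point /results \
    --cluster-lifetime 8h \
    --name my-dask-cluster
```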

update

Update an existing cluster

ngc base-command quickstart cluster update
                                           [--additional-open-ports ADDITIONAL_OPEN_PORTS]
                                           [--additional-port-mappings ADDITIONAL_PORT_MAPPINGS]
                                           [--cluster-lifetime CLUSTER_LIFETIME]
                                           [--conda-packages CONDA_PACKAGES]
                                           [--container-image CONTAINER_IMAGE]
                                           [--data-output-mount-point DATA_OUTPUT_MOUNT_POINT]
                                           [--dataset-mount DATASET_MOUNT]
                                           [--debug]
                                           [--expiry-duration EXPIRY_DURATION]
                                           [--format_type <fmt>]
                                           [--job-order JOB_ORDER]
                                           [--job-priority JOB_PRIORITY]
                                           [--label LABEL] [--labels-locked]
                                           [--min-availability MIN_AVAILABILITY]
                                           [--min-time-slice MIN_TIME_SLICE]
                                           [--multi-node] [--name NAME]
                                           [--nworkers NWORKERS]
                                           [--options <key:value>]
                                           [--pip-packages PIP_PACKAGES]
                                           [--preempt-class PREEMPT_CLASS]
                                           [--remove-dataset-mounts]
                                           [--remove-workspace-mounts]
                                           [--scheduler-dashboard-address SCHEDULER_DASHBOARD_ADDRESS]
                                           [--scheduler-env-var SCHEDULER_ENV_VAR]
                                           [--scheduler-instance-type SCHEDULER_INSTANCE_TYPE]
                                           [--scheduler-port SCHEDULER_PORT]
                                           [--scheduler-reserved-gpus SCHEDULER_RESERVED_GPUS]
                                           [--scheduler-startup-script SCHEDULER_STARTUP_SCRIPT]
                                           [--system-packages SYSTEM_PACKAGES]
                                           [--topology-constraint {any,pack}]
                                           [--user-secret <secret[:key_name:alias_name]>]
                                           [--worker-dashboard-address WORKER_DASHBOARD_ADDRESS]
                                           [--worker-env-var WORKER_ENV_VAR]
                                           [--worker-instance-type WORKER_INSTANCE_TYPE]
                                           [--worker-reserved-gpus WORKER_RESERVED_GPUS]
                                           [--worker-startup-script WORKER_STARTUP_SCRIPT]
                                           [--workspace-mount WORKSPACE_MOUNT]
                                           [-h]
                                           cluster_id
Positional Arguments
cluster_id

The ID of the cluster

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--name

The name of the cluster

--scheduler-dashboard-address

The dashboard address for the scheduler. Only used for 'dask' cluster type.

--scheduler-instance-type

The instance type of the scheduler

--scheduler-port

The port to use for the scheduler. Only used for 'dask' cluster type.

--scheduler-startup-script

The startup script for the scheduler

--scheduler-reserved-gpus

The number of GPUs reserved for the scheduler

--scheduler-env-var

An environment variable to be set in the scheduler node. It must be in the format 'var_name:value'. You may define more than one environment variable by specifying multiple '--scheduler-env-var' arguments.

--worker-dashboard-address

The dashboard address for the worker. Only used for 'dask' cluster type.

--worker-instance-type

The instance type of the worker. Only used for 'dask' cluster type.

--worker-startup-script

The startup script for the worker. Only used for 'dask' cluster type.

--worker-reserved-gpus

The number of GPUs reserved for the worker. Only used for 'dask' cluster type.

--worker-env-var

An environment variable to be set in the worker node. It must be in the format 'var_name:value'. You may define more than one environment variable by specifying multiple '--worker-env-var' arguments. Only used for 'dask' cluster type.

--system-packages

List of packages to install on the scheduler and worker using the 'apt install' or 'yum install' command (apt or yum is chosen based on the Linux distribution of the container image). You may define more than one package by specifying multiple '--system-packages' arguments.

--conda-packages

List of packages to install on the scheduler and worker using 'conda install' command. You may define more than one package by specifying multiple '--conda-packages' arguments.

--pip-packages

List of packages to install on the scheduler and worker using 'pip install' command. You may define more than one package by specifying multiple '--pip-packages' arguments.

--nworkers

Number of workers in the cluster

--container-image

The container image to use

--data-output-mount-point

The path to where the data output will be mounted

--cluster-lifetime

The lifetime for the cluster. The format is <num>X, where 'X' is a single letter representing the unit of time: <d|h|m|s>, for days, hours, minutes, and seconds, respectively.

--expiry-duration

The expiry duration for the cluster. The format is <num>X, where 'X' is a single letter representing the unit of time: <d|h|m|s>, for days, hours, minutes, and seconds, respectively.

--additional-open-ports

(deprecated; use additional-port-mappings instead) Any additional ports to open for the cluster. You may include more than one additional port mapping by specifying multiple --additional-open-ports arguments.

--additional-port-mappings

Additional ports to open on the cluster. Mappings should be in the format '[name:]port[/protocol]'. If protocol is not specified, HTTPS will be used. The name portion cannot be included for HTTPS and GRPC protocols; it is required for the others. Valid protocols: (TCP|UDP|HTTPS|GRPC). You may include more than one additional port mapping by specifying multiple --additional-port-mappings arguments.

--dataset-mount

A mount point for a dataset. Enter in the format '<id>:<mount point path>'. The 'id' value must be an integer. You may include more than one dataset mount point by specifying multiple '--dataset-mount' arguments.

--workspace-mount

A mount point for a workspace. Enter in the format '<id>:<mount point path>:<rw>'. The 'rw' value should be 'true' if the mount is read/write, and 'false' if it is read-only. You may include more than one workspace mount point by specifying multiple '--workspace-mount' arguments.

--label

A user/reserved/system label that describes this job. You may define more than one label by specifying multiple '--label' arguments.

--labels-locked

Lock labels so they cannot be changed later. Default=False

--multi-node

Determines whether the cluster is multi-node. Default=False

--job-order

Order of the job; from 1 to 99.

--job-priority

Priority of the job; choose from 'LOW', 'NORMAL', or 'HIGH'. Default='NORMAL'

--min-availability

Minimum replicas that need to be scheduled to start a multi-node job.

--min-time-slice

Minimum duration (in the format [nD][nH][nM][nS]) the job is expected (not guaranteed) to remain in the RUNNING state once scheduled, to ensure forward progress.

--options

A custom environment variable to add to the job, in the form of a key-value pair. A key name must be between 1 and 63 characters and contain letters, numbers, or './-_'. May be used multiple times in the same command.

--preempt-class

Describes the job class for preemption and scheduling behavior. It must be one of 'RESUMABLE', 'RESTARTABLE', or 'RUNONCE' (default).

--topology-constraint

Possible choices: any, pack

Specifies a topology constraint for the job. Only available for multi-node jobs. Choices: pack, any.

--user-secret

Specify a secret name for the job. Format: '<secret[:key_name:alias_name]>'. Multiple secret arguments are allowed. Unless specific keys are given, all key-value pairs will be included in the job. Key overrides are optionally available per key-value pair.

--remove-dataset-mounts

Remove any existing dataset mounts for this cluster

--remove-workspace-mounts

Remove any existing workspace mounts for this cluster
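All flags to update are optional except the cluster ID, so a targeted change needs only the fields being modified; a hypothetical invocation (the ID and values are placeholders):

```shell
# Scale cluster 1234 to four workers and extend its lifetime to 24 hours
ngc base-command quickstart cluster update 1234 \
    --nworkers 4 \
    --cluster-lifetime 24h
```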

project

QuickStart Project Commands

ngc base-command quickstart project [--debug] [--format_type <fmt>] [-h]
                                    {cluster-create,cluster-remove,create,create-template,info,info-template,list,list-templates,remove,remove-template,update,update-template}
                                    ...
Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

project
{cluster-create,cluster-remove,create,create-template,info,info-template,list,list-templates,remove,remove-template,update,update-template}

Possible choices: cluster-create, cluster-remove, create, create-template, info, info-template, list, list-templates, remove, remove-template, update, update-template

Sub-commands
list

List projects.

ngc base-command quickstart project list [--column <column>] [--debug]
                                         [--format_type <fmt>] [--org-only]
                                         [--owned] [-h]
Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--column

Specify output column as column[=header], header is optional, default is name[=Name]. Valid columns are ace[=ACE], description[=Description], id[=ID], name[=Name], org[=Org], owner[=Owner], team[=Team]. Use quotes with spaces. Multiple column arguments are allowed.

--org-only

Don't return projects created at the team level

--owned

Only return projects I own (admin only)
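A hypothetical listing that combines the column and format options (the chosen columns are illustrative):

```shell
# List org-level projects as CSV, showing only the ID, name, and owner columns
ngc base-command quickstart project list --org-only \
    --column id --column name --column owner \
    --format_type csv
```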

create

Create a new project

ngc base-command quickstart project create [--debug] --description DESCRIPTION
                                           [--format_type <fmt>] --name NAME
                                           [--owner OWNER] [-h]
Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--owner

The owner of the project. If not specified, the email for the current user will be used.

Required named arguments
--name

The name of the project

--description

A description of the project
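A hypothetical invocation with both required arguments (the name and description are placeholders; '--owner' is omitted here, so it defaults to the current user's email):

```shell
# Create a project owned by the current user
ngc base-command quickstart project create \
    --name "nlp-experiments" \
    --description "Sandbox project for NLP prototyping"
```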

update

Update a project

ngc base-command quickstart project update [--debug]
                                           [--description DESCRIPTION]
                                           [--format_type <fmt>] [--name NAME]
                                           [-h]
                                           project_id
Positional Arguments
project_id

The ID of the project

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--name

The name of the project

--description

A description of the project

info

Show information about a project

ngc base-command quickstart project info [--debug] [--format_type <fmt>] [-h]
                                         project_id
Positional Arguments
project_id

The ID of the project

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

remove

Delete a project

ngc base-command quickstart project remove [--debug] [--format_type <fmt>]
                                           [-h]
                                           project_id
Positional Arguments
project_id

The ID of the project

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

cluster-create

Create a new cluster for the specified project

ngc base-command quickstart project cluster-create
                                                   [--additional-open-ports ADDITIONAL_OPEN_PORTS]
                                                   [--additional-port-mappings ADDITIONAL_PORT_MAPPINGS]
                                                   --cluster-lifetime
                                                   CLUSTER_LIFETIME
                                                   --cluster-type
                                                   {dask,jupyterlab}
                                                   [--conda-packages CONDA_PACKAGES]
                                                   --container-image
                                                   CONTAINER_IMAGE
                                                   --data-output-mount-point
                                                   DATA_OUTPUT_MOUNT_POINT
                                                   [--dataset-mount DATASET_MOUNT]
                                                   [--debug]
                                                   [--expiry-duration EXPIRY_DURATION]
                                                   [--format_type <fmt>]
                                                   [--job-order JOB_ORDER]
                                                   [--job-priority JOB_PRIORITY]
                                                   [--label LABEL]
                                                   [--labels-locked]
                                                   [--min-availability MIN_AVAILABILITY]
                                                   [--min-time-slice MIN_TIME_SLICE]
                                                   [--multi-node]
                                                   [--name NAME] --nworkers
                                                   NWORKERS
                                                   [--options <key:value>]
                                                   [--pip-packages PIP_PACKAGES]
                                                   [--preempt-class PREEMPT_CLASS]
                                                   [--scheduler-dashboard-address SCHEDULER_DASHBOARD_ADDRESS]
                                                   [--scheduler-env-var SCHEDULER_ENV_VAR]
                                                   --scheduler-instance-type
                                                   SCHEDULER_INSTANCE_TYPE
                                                   [--scheduler-port SCHEDULER_PORT]
                                                   [--scheduler-reserved-gpus SCHEDULER_RESERVED_GPUS]
                                                   [--scheduler-startup-script SCHEDULER_STARTUP_SCRIPT]
                                                   [--system-packages SYSTEM_PACKAGES]
                                                   [--topology-constraint {any,pack}]
                                                   [--user-secret <secret[:key_name:alias_name]>]
                                                   [--worker-dashboard-address WORKER_DASHBOARD_ADDRESS]
                                                   [--worker-env-var WORKER_ENV_VAR]
                                                   [--worker-instance-type WORKER_INSTANCE_TYPE]
                                                   [--worker-reserved-gpus WORKER_RESERVED_GPUS]
                                                   [--worker-startup-script WORKER_STARTUP_SCRIPT]
                                                   [--workspace-mount WORKSPACE_MOUNT]
                                                   [-h]
                                                   project_id
Positional Arguments
project_id

The ID of the project

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--name

The name of the cluster

--scheduler-dashboard-address

The dashboard address for the scheduler. Only used for 'dask' cluster type.

--scheduler-port

The port to use for the scheduler. Only used for 'dask' cluster type.

--scheduler-startup-script

The startup script for the scheduler

--scheduler-reserved-gpus

The number of GPUs reserved for the scheduler

--scheduler-env-var

An environment variable to be set in the scheduler node. It must be in the format 'var_name:value'. You may define more than one environment variable by specifying multiple '--scheduler-env-var' arguments.

--worker-dashboard-address

The dashboard address for the worker. Only used for 'dask' cluster type.

--worker-instance-type

The instance type of the worker. Required for 'dask' cluster type; ignored otherwise.

--worker-startup-script

The startup script for the worker. Only used for 'dask' cluster type.

--worker-reserved-gpus

The number of GPUs reserved for the worker. Only used for 'dask' cluster type.

--worker-env-var

An environment variable to be set in the worker node. It must be in the format 'var_name:value'. You may define more than one environment variable by specifying multiple '--worker-env-var' arguments. Only used for 'dask' cluster type.

--system-packages

List of packages to install on the scheduler and worker using the 'apt install' or 'yum install' command (apt or yum is chosen based on the Linux distribution of the container image). You may define more than one package by specifying multiple '--system-packages' arguments.

--conda-packages

List of packages to install on the scheduler and worker using 'conda install' command. You may define more than one package by specifying multiple '--conda-packages' arguments.

--pip-packages

List of packages to install on the scheduler and worker using 'pip install' command. You may define more than one package by specifying multiple '--pip-packages' arguments.

--expiry-duration

The expiry duration for the cluster. The format is <num>X, where 'X' is a single letter representing the unit of time: <d|h|m|s>, for days, hours, minutes, and seconds, respectively.

--additional-open-ports

(deprecated; use additional-port-mappings instead) Any additional ports to open for the cluster. You may include more than one additional port mapping by specifying multiple --additional-open-ports arguments.

--additional-port-mappings

Additional ports to open on the cluster. Mappings should be in the format '[name:]port[/protocol]'. If protocol is not specified, HTTPS will be used. The name portion cannot be included for HTTPS and GRPC protocols; it is required for the others. Valid protocols: (TCP|UDP|HTTPS|GRPC). You may include more than one additional port mapping by specifying multiple --additional-port-mappings arguments.

--dataset-mount

A mount point for a dataset. Enter in the format '<id>:<mount point path>'. The 'id' value must be an integer. You may include more than one dataset mount point by specifying multiple '--dataset-mount' arguments.

--workspace-mount

A mount point for a workspace. Enter in the format '<id>:<mount point path>:<rw>'. The 'rw' value should be 'true' if the mount is read/write, and 'false' if it is read-only. You may include more than one workspace mount point by specifying multiple '--workspace-mount' arguments.

--label

A user/reserved/system label that describes this job. You may define more than one label by specifying multiple '--label' arguments.

--labels-locked

Lock labels so they cannot be changed later. Default=False

--multi-node

Only used for 'jupyterlab' cluster types. Determines whether the cluster is multi-node. Default=False

--job-order

Order of the job; from 1 to 99.

--job-priority

Priority of the job; choose from 'LOW', 'NORMAL', or 'HIGH'. Default='NORMAL'

--min-availability

Minimum replicas that need to be scheduled to start a multi-node job.

--min-time-slice

Minimum duration (in the format [nD][nH][nM][nS]) the job is expected (not guaranteed) to remain in the RUNNING state once scheduled, to ensure forward progress.

--options

A custom environment variable to add to the job, in the form of a key-value pair. A key name must be between 1 and 63 characters and contain letters, numbers, or './-_'. May be used multiple times in the same command.

--preempt-class

Describes the job class for preemption and scheduling behavior. It must be one of 'RESUMABLE', 'RESTARTABLE', or 'RUNONCE' (default).

--topology-constraint

Possible choices: any, pack

Specifies a topology constraint for the job. Only available for multi-node jobs. Choices: pack, any.

--user-secret

Specify a secret name for the job. Format: '<secret[:key_name:alias_name]>'. Multiple secret arguments are allowed. Unless specific keys are given, all key-value pairs will be included in the job. Key overrides are optionally available per key-value pair.

Required named arguments
--cluster-type

Possible choices: dask, jupyterlab

The type of cluster: choose from ['dask', 'jupyterlab'].

--scheduler-instance-type

The instance type of the scheduler

--nworkers

Number of workers in the cluster

--container-image

The container image to use

--data-output-mount-point

The path to where the data output will be mounted

--cluster-lifetime

The lifetime for the cluster. The format is <num>X, where 'X' is a single letter representing the unit of time: <d|h|m|s>, for days, hours, minutes, and seconds, respectively.
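A hypothetical invocation creating a JupyterLab cluster inside a project (the project ID, image path, and instance type are placeholders; '--worker-instance-type' is omitted because it is ignored for non-dask cluster types):

```shell
# Create a single-node JupyterLab cluster under project 42
ngc base-command quickstart project cluster-create \
    --cluster-type jupyterlab \
    --container-image "nvcr.io/myorg/myteam/myimage:latest" \
    --scheduler-instance-type dgxa100.80g.1.norm \
    --nworkers 1 \
    --data-output-mount-point /results \
    --cluster-lifetime 4h \
    42
```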

cluster-remove

Shut down and delete a cluster from a project

ngc base-command quickstart project cluster-remove --cluster-id CLUSTER_ID
                                                   [--debug]
                                                   [--format_type <fmt>] [-h]
                                                   project_id
Positional Arguments
project_id

The ID of the project

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

Required named arguments
--cluster-id

The ID of the cluster to remove
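
A minimal sketch of removing a cluster, with placeholder IDs and JSON output:

```shell
# Shut down and delete cluster 42 from project 1234 (IDs are hypothetical),
# printing the response as JSON.
ngc base-command quickstart project cluster-remove \
    --cluster-id 42 \
    --format_type json \
    1234
```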

create-template

Create a new project template

ngc base-command quickstart project create-template
                                                    [--additional-open-ports ADDITIONAL_OPEN_PORTS]
                                                    [--additional-port-mappings ADDITIONAL_PORT_MAPPINGS]
                                                    --cluster-lifetime
                                                    CLUSTER_LIFETIME
                                                    [--cluster-name CLUSTER_NAME]
                                                    --cluster-type
                                                    {dask,jupyterlab}
                                                    [--conda-packages CONDA_PACKAGES]
                                                    --container-image
                                                    CONTAINER_IMAGE
                                                    --data-output-mount-point
                                                    DATA_OUTPUT_MOUNT_POINT
                                                    [--dataset-mount DATASET_MOUNT]
                                                    [--debug] [--default]
                                                    --description DESCRIPTION
                                                    --display-image-url
                                                    DISPLAY_IMAGE_URL
                                                    [--expiry-duration EXPIRY_DURATION]
                                                    [--format_type <fmt>]
                                                    [--job-order JOB_ORDER]
                                                    [--job-priority JOB_PRIORITY]
                                                    [--label LABEL]
                                                    [--labels-locked]
                                                    [--min-availability MIN_AVAILABILITY]
                                                    [--min-time-slice MIN_TIME_SLICE]
                                                    [--multi-node] --name NAME
                                                    --nworkers NWORKERS
                                                    [--options <key:value>]
                                                    [--pip-packages PIP_PACKAGES]
                                                    [--preempt-class PREEMPT_CLASS]
                                                    [--scheduler-dashboard-address SCHEDULER_DASHBOARD_ADDRESS]
                                                    [--scheduler-env-var SCHEDULER_ENV_VAR]
                                                    --scheduler-instance-type
                                                    SCHEDULER_INSTANCE_TYPE
                                                    [--scheduler-port SCHEDULER_PORT]
                                                    [--scheduler-reserved-gpus SCHEDULER_RESERVED_GPUS]
                                                    [--scheduler-startup-script SCHEDULER_STARTUP_SCRIPT]
                                                    [--system-packages SYSTEM_PACKAGES]
                                                    [--topology-constraint {any,pack}]
                                                    [--user-secret <secret[:key_name:alias_name]>]
                                                    [--worker-dashboard-address WORKER_DASHBOARD_ADDRESS]
                                                    [--worker-env-var WORKER_ENV_VAR]
                                                    [--worker-instance-type WORKER_INSTANCE_TYPE]
                                                    [--worker-reserved-gpus WORKER_RESERVED_GPUS]
                                                    [--worker-startup-script WORKER_STARTUP_SCRIPT]
                                                    [--workspace-mount WORKSPACE_MOUNT]
                                                    [-h]
Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--cluster-name

The name of the cluster

--expiry-duration

The expiry duration for the cluster. The format is <num>X, where 'X' is a single letter representing the unit of time: <d|h|m|s>, for days, hours, minutes, and seconds, respectively.

--additional-open-ports

(deprecated; use additional-port-mappings instead) Any additional ports to open for the cluster. You may include more than one additional port mapping by specifying multiple --additional-open-ports arguments.

--additional-port-mappings

Additional ports to open on the cluster. Mappings should be in the format '[name:]port[/protocol]'. If protocol is not specified, HTTPS will be used. The name portion cannot be included for HTTPS and GRPC protocols; it is required for the others. Valid protocols: (TCP|UDP|HTTPS|GRPC). You may include more than one additional port mapping by specifying multiple --additional-port-mappings arguments.

--scheduler-dashboard-address

The dashboard address for the scheduler. Only used for 'dask' cluster type.

--scheduler-port

The port to use for the scheduler. Only used for 'dask' cluster type.

--scheduler-startup-script

The startup script for the scheduler

--scheduler-reserved-gpus

The number of GPUs reserved for the scheduler

--scheduler-env-var

An environment variable to be set in the scheduler node. It must be in the format 'var_name:value'. You may define more than one environment variable by specifying multiple '--scheduler-env-var' arguments.

--worker-dashboard-address

The dashboard address for the worker. Only used for 'dask' cluster type.

--worker-instance-type

The instance type of the worker. Required for 'dask' cluster type; ignored otherwise.

--worker-startup-script

The startup script for the worker. Only used for 'dask' cluster type.

--worker-reserved-gpus

The number of GPUs reserved for the worker. Only used for 'dask' cluster type.

--worker-env-var

An environment variable to be set in the worker node. It must be in the format 'var_name:value'. You may define more than one environment variable by specifying multiple '--worker-env-var' arguments. Only used for 'dask' cluster type.

--system-packages

List of packages to install on the scheduler and worker using the 'apt install' or 'yum install' command ('apt' or 'yum' is chosen depending on the flavor of the Linux image in the container). You may define more than one package by specifying multiple '--system-packages' arguments.

--conda-packages

List of packages to install on the scheduler and worker using 'conda install' command. You may define more than one package by specifying multiple '--conda-packages' arguments.

--pip-packages

List of packages to install on the scheduler and worker using 'pip install' command. You may define more than one package by specifying multiple '--pip-packages' arguments.

--dataset-mount

A mount point for a dataset. Enter in the format '<id>:<mount point path>'. The 'id' value must be an integer. You may include more than one dataset mount point by specifying multiple '--dataset-mount' arguments.

--workspace-mount

A mount point for a workspace. Enter in the format '<id>:<mount point path>:<rw>'. The 'rw' value should be 'true' if the mount is read/write and 'false' if it is read-only. You may include more than one workspace mount point by specifying multiple '--workspace-mount' arguments.

--label

A user/reserved/system label that describes this job. You may define more than one label by specifying multiple '--label' arguments.

--labels-locked

If set, labels cannot be changed. Default=False

--default

Set this template as the default for the cluster type. Admin only

--multi-node

Only used for jupyterlab cluster types. Determines whether the cluster is multi-node. Default=False

--job-order

Order of the job; from 1 to 99.

--job-priority

Priority of the job; choose from 'LOW', 'NORMAL', or 'HIGH'. Default='NORMAL'

--min-availability

Minimum replicas that need to be scheduled to start a multi-node job.

--min-time-slice

Minimum duration (in the format [nD][nH][nM][nS]) the job is expected (not guaranteed) to be in the RUNNING state once scheduled, to ensure forward progress.

--options

A custom environment variable to add to the job in the form of a key-value pair. A key name must be between 1 and 63 characters and contain only letters, numbers, or './-_'. May be used multiple times in the same command.

--preempt-class

Describes the job class for preemption and scheduling behavior. It must be one of 'RESUMABLE', 'RESTARTABLE', or 'RUNONCE' (default).

--topology-constraint

Possible choices: any, pack

Specifies a topology constraint for the job. Only available for multi-node jobs. Choices: pack, any.

--user-secret

Specify a secret name for the job. Format: '<secret[:key_name:alias_name]>'. Multiple secret arguments are allowed. Unless individual keys are specified, all key-value pairs will be included in the job. Overriding a key name with an alias is optionally available per key-value pair.

Required named arguments
--name

The name of the project template

--description

A description of the project template

--cluster-type

Possible choices: dask, jupyterlab

The type of cluster: choose from ['dask', 'jupyterlab'].

--container-image

Container image for the template

--display-image-url

URL of the image to display for the template

--nworkers

Number of workers

--cluster-lifetime

The lifetime for the cluster. The format is <num>X, where 'X' is a single letter representing the unit of time: <d|h|m|s>, for days, hours, minutes, and seconds, respectively.

--scheduler-instance-type

The instance type of the scheduler

--data-output-mount-point

The path to where the data output will be mounted
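
Combining the required arguments above, a hypothetical `create-template` call might look like the following; all names, URLs, and instance types are placeholders:

```shell
# Hypothetical sketch: register a single-worker JupyterLab template.
ngc base-command quickstart project create-template \
    --name "jlab-starter" \
    --description "Single-node JupyterLab for quick experiments" \
    --cluster-type jupyterlab \
    --container-image "nvidia/pytorch:24.01-py3" \
    --display-image-url "https://example.com/jlab.png" \
    --nworkers 1 \
    --cluster-lifetime 12h \
    --scheduler-instance-type dgxa100.80g.1.norm \
    --data-output-mount-point /results
```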

update-template

Modify a project template

ngc base-command quickstart project update-template
                                                    [--additional-open-ports ADDITIONAL_OPEN_PORTS]
                                                    [--additional-port-mappings ADDITIONAL_PORT_MAPPINGS]
                                                    [--cluster-lifetime CLUSTER_LIFETIME]
                                                    [--cluster-name CLUSTER_NAME]
                                                    [--conda-packages CONDA_PACKAGES]
                                                    [--container-image CONTAINER_IMAGE]
                                                    [--data-output-mount-point DATA_OUTPUT_MOUNT_POINT]
                                                    [--dataset-mount DATASET_MOUNT]
                                                    [--debug] [--default]
                                                    [--description DESCRIPTION]
                                                    [--display-image-url DISPLAY_IMAGE_URL]
                                                    [--expiry-duration EXPIRY_DURATION]
                                                    [--format_type <fmt>]
                                                    [--job-order JOB_ORDER]
                                                    [--job-priority JOB_PRIORITY]
                                                    [--label LABEL]
                                                    [--labels-locked]
                                                    [--min-availability MIN_AVAILABILITY]
                                                    [--min-time-slice MIN_TIME_SLICE]
                                                    [--multi-node]
                                                    [--name NAME]
                                                    [--nworkers NWORKERS]
                                                    [--options <key:value>]
                                                    [--pip-packages PIP_PACKAGES]
                                                    [--preempt-class PREEMPT_CLASS]
                                                    [--remove-dataset-mounts]
                                                    [--remove-default]
                                                    [--remove-workspace-mounts]
                                                    [--scheduler-dashboard-address SCHEDULER_DASHBOARD_ADDRESS]
                                                    [--scheduler-env-var SCHEDULER_ENV_VAR]
                                                    [--scheduler-instance-type SCHEDULER_INSTANCE_TYPE]
                                                    [--scheduler-port SCHEDULER_PORT]
                                                    [--scheduler-reserved-gpus SCHEDULER_RESERVED_GPUS]
                                                    [--scheduler-startup-script SCHEDULER_STARTUP_SCRIPT]
                                                    [--system-packages SYSTEM_PACKAGES]
                                                    [--topology-constraint {any,pack}]
                                                    [--user-secret <secret[:key_name:alias_name]>]
                                                    [--worker-dashboard-address WORKER_DASHBOARD_ADDRESS]
                                                    [--worker-env-var WORKER_ENV_VAR]
                                                    [--worker-instance-type WORKER_INSTANCE_TYPE]
                                                    [--worker-reserved-gpus WORKER_RESERVED_GPUS]
                                                    [--worker-startup-script WORKER_STARTUP_SCRIPT]
                                                    [--workspace-mount WORKSPACE_MOUNT]
                                                    [-h]
                                                    template_id
Positional Arguments
template_id

The ID of the template

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--name

The name of the project template

--cluster-name

The name of the cluster

--description

A description of the project template

--container-image

Container image for the template

--display-image-url

URL of the image to display for the template

--nworkers

Number of workers

--cluster-lifetime

The lifetime for the cluster. The format is <num>X, where 'X' is a single letter representing the unit of time: <d|h|m|s>, for days, hours, minutes, and seconds, respectively.

--expiry-duration

The expiry duration for the cluster. The format is <num>X, where 'X' is a single letter representing the unit of time: <d|h|m|s>, for days, hours, minutes, and seconds, respectively.

--additional-open-ports

(deprecated; use additional-port-mappings instead) Any additional ports to open for the cluster. You may include more than one additional port mapping by specifying multiple --additional-open-ports arguments.

--additional-port-mappings

Additional ports to open on the cluster. Mappings should be in the format '[name:]port[/protocol]'. If protocol is not specified, HTTPS will be used. The name portion cannot be included for HTTPS and GRPC protocols; it is required for the others. Valid protocols: (TCP|UDP|HTTPS|GRPC). You may include more than one additional port mapping by specifying multiple --additional-port-mappings arguments.

--scheduler-dashboard-address

The dashboard address for the scheduler. Only used for 'dask' cluster type.

--scheduler-instance-type

The instance type of the scheduler

--scheduler-port

The port to use for the scheduler. Only used for 'dask' cluster type.

--scheduler-startup-script

The startup script for the scheduler

--scheduler-reserved-gpus

The number of GPUs reserved for the scheduler

--scheduler-env-var

An environment variable to be set in the scheduler node. It must be in the format 'var_name:value'. You may define more than one environment variable by specifying multiple '--scheduler-env-var' arguments.

--worker-dashboard-address

The dashboard address for the worker. Only used for 'dask' cluster type.

--worker-instance-type

The instance type of the worker. Only used for 'dask' cluster type.

--worker-startup-script

The startup script for the worker. Only used for 'dask' cluster type.

--worker-reserved-gpus

The number of GPUs reserved for the worker. Only used for 'dask' cluster type.

--worker-env-var

An environment variable to be set in the worker node. It must be in the format 'var_name:value'. You may define more than one environment variable by specifying multiple '--worker-env-var' arguments. Only used for 'dask' cluster type.

--system-packages

List of packages to install on the scheduler and worker using the 'apt install' or 'yum install' command ('apt' or 'yum' is chosen depending on the flavor of the Linux image in the container). You may define more than one package by specifying multiple '--system-packages' arguments.

--conda-packages

List of packages to install on the scheduler and worker using 'conda install' command. You may define more than one package by specifying multiple '--conda-packages' arguments.

--pip-packages

List of packages to install on the scheduler and worker using 'pip install' command. You may define more than one package by specifying multiple '--pip-packages' arguments.

--data-output-mount-point

The path to where the data output will be mounted

--dataset-mount

A mount point for a dataset. Enter in the format '<id>:<mount point path>'. The 'id' value must be an integer. You may include more than one dataset mount point by specifying multiple '--dataset-mount' arguments.

--workspace-mount

A mount point for a workspace. Enter in the format '<id>:<mount point path>:<rw>'. The 'rw' value should be 'true' if the mount is read/write and 'false' if it is read-only. You may include more than one workspace mount point by specifying multiple '--workspace-mount' arguments.

--label

A user/reserved/system label that describes this job. You may define more than one label by specifying multiple '--label' arguments.

--labels-locked

If set, labels cannot be changed. Default=False

--default

Set this template as the default for the cluster type. Admin only

--remove-default

Unmark this template as the default for this template's cluster type. Admin only

--multi-node

Only used for jupyterlab cluster types. Determines whether the cluster is multi-node. Default=False

--job-order

Order of the job; from 1 to 99.

--job-priority

Priority of the job; choose from 'LOW', 'NORMAL', or 'HIGH'. Default='NORMAL'

--min-availability

Minimum replicas that need to be scheduled to start a multi-node job.

--min-time-slice

Minimum duration (in the format [nD][nH][nM][nS]) the job is expected (not guaranteed) to be in the RUNNING state once scheduled, to ensure forward progress.

--options

A custom environment variable to add to the job in the form of a key-value pair. A key name must be between 1 and 63 characters and contain only letters, numbers, or './-_'. May be used multiple times in the same command.

--preempt-class

Describes the job class for preemption and scheduling behavior. It must be one of 'RESUMABLE', 'RESTARTABLE', or 'RUNONCE' (default).

--topology-constraint

Possible choices: any, pack

Specifies a topology constraint for the job. Only available for multi-node jobs. Choices: pack, any.

--user-secret

Specify a secret name for the job. Format: '<secret[:key_name:alias_name]>'. Multiple secret arguments are allowed. Unless individual keys are specified, all key-value pairs will be included in the job. Overriding a key name with an alias is optionally available per key-value pair.

--remove-dataset-mounts

Remove any existing dataset mounts for this cluster

--remove-workspace-mounts

Remove any existing workspace mounts for this cluster
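
As a sketch, updating a template's image and replacing its dataset mounts might look like this; the template ID, image tag, and dataset ID are placeholders:

```shell
# Point template 77 at a newer image, clear its existing dataset mounts,
# and attach dataset 9876 at /data/train. All IDs are hypothetical.
ngc base-command quickstart project update-template \
    --container-image "nvidia/pytorch:24.02-py3" \
    --remove-dataset-mounts \
    --dataset-mount 9876:/data/train \
    77
```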

list-templates

List project templates.

ngc base-command quickstart project list-templates [--column <column>]
                                                   [--debug] [--default-only]
                                                   [--format_type <fmt>]
                                                   [--template-type TEMPLATE_TYPE]
                                                   [-h]
Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--column

Specify output column as column[=header], header is optional, default is name[=Name]. Valid columns are id[=ID], name[=Name], description[=Description], display-image-url[="Display Image"]. Use quotes with spaces. Multiple column arguments are allowed.

--default-only

Only list default template

--template-type

Type of template to show. Choices: dask, jupyterlab. Default='dask'
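
For example, to list JupyterLab templates showing only the ID and name columns, emitted as CSV:

```shell
# List jupyterlab templates; show the id and name columns in CSV form.
ngc base-command quickstart project list-templates \
    --template-type jupyterlab \
    --column id --column name \
    --format_type csv
```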

remove-template

Delete a project template

ngc base-command quickstart project remove-template [--debug]
                                                    [--format_type <fmt>] [-h]
                                                    template_id
Positional Arguments
template_id

The ID of the template

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

info-template

Get information about a project template

ngc base-command quickstart project info-template [--debug]
                                                  [--format_type <fmt>] [-h]
                                                  template_id
Positional Arguments
template_id

The ID of the template

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

resource

Resource Commands

ngc base-command resource [--debug] [--format_type <fmt>] [-h]  ...

Named Arguments

--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

resource

Possible choices: create, info, list, remove, telemetry, update

Sub-commands

list

List resources. Specify an ACE to list child pools, or 'no-ace' to list across ACEs.

ngc base-command resource list [--column <column>] [--debug]
                               [--format_type <fmt>] [--root-pool]
                               [--user-id <user_id>] [-h]
Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--user-id

Specify the user ID; use 'user-defaults' for the default pool.

--column

Specify output column as column[=header], header is optional, default is id[=Id]. Valid columns are id[=Id], poolType[=Type], description[=Description], resourceTypeName[="Resource Type"]. Use quotes with spaces. Multiple column arguments are allowed.

--root-pool

Use to list root pools instead.

info

Resource information.

ngc base-command resource info [--debug] [--format_type <fmt>] [--root-pool]
                               [--user-id <user_id>] [-h]
Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--user-id

Specify the user ID; use 'user-defaults' for the default pool.

--root-pool

Use to get root pool info instead.

create

Create Pool.

ngc base-command resource create [--allocation <allocation>] [--debug]
                                 [--default <default>]
                                 [--description <description>]
                                 [--format_type <fmt>] [--user-id <user_id>]
                                 [--version <version>] [-h]
Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--user-id

Specify the user ID; use 'user-defaults' for the default pool.

--description

Specify pool description.

--version

Specify pool version.

--allocation

Specify resource allocations for the pool. Format <type>:<limit>:<share>:<priority>. Either allocation or default is required.

--default

Specify resource defaults for the pool, required for creating team pool. Format <type>:<limit>:<share>:<priority>. Either allocation or default is required.
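
A sketch of creating a pool with a single allocation; the resource type name and the limit/share/priority values are placeholders illustrating the <type>:<limit>:<share>:<priority> format:

```shell
# Create a pool with one allocation: resource type dgxa100.80g.8.norm,
# limit 16, share 10, priority 1 (all values hypothetical).
ngc base-command resource create \
    --description "Team research pool" \
    --allocation dgxa100.80g.8.norm:16:10:1
```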

update

Update Pool.

ngc base-command resource update [--add-allocation <add_allocation>] [--debug]
                                 [--description <description>]
                                 [--format_type <fmt>]
                                 [--remove-allocation <remove_allocation>]
                                 [--update-allocation <update_allocation>]
                                 [--user-id <user_id>] [-h]
Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--description

Specify pool description.

--update-allocation

Specify resource allocations to update. Format <type>:<limit>:<share>:<priority>.

--add-allocation

Specify resource allocations to add to the pool. Format <type>:<limit>:<share>:<priority>.

--remove-allocation

Specify resource allocations to remove from the pool. Format <resource_type>.

--user-id

Specify the user ID; use 'user-defaults' for the default pool.
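
A sketch of updating a pool, raising one allocation's limit and removing another; the resource type names are placeholders:

```shell
# Raise the limit on one allocation and drop another entirely.
ngc base-command resource update \
    --update-allocation dgxa100.80g.8.norm:32:10:1 \
    --remove-allocation dgxa100.80g.4.norm
```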

remove

Remove Pool.

ngc base-command resource remove [--debug] [--format_type <fmt>]
                                 [--user-id <user_id>] [-h] [-y]
Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--user-id

Specify the user ID; use 'user-defaults' for the default pool.

-y, --yes

Automatically say yes to all interactive questions.

telemetry

Resource telemetry.

ngc base-command resource telemetry [--debug] [--end-time <end_time>]
                                    [--format_type <fmt>]
                                    [--interval-time <interval_time>]
                                    [--interval-unit <interval_unit>]
                                    --resource-type <resource_type>
                                    [--root-pool] [--start-time <start-time>]
                                    [--telemetry-type <telemetry_type>]
                                    [--user-id <user_id>] [-h]
Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--user-id

Specify the user ID; use 'user-defaults' for the default pool.

--telemetry-type

Possible choices: ACTIVE_FAIR_SHARE, FAIR_SHARE, POOL_CAPACITY, POOL_LIMIT, QUEUED_JOBS, RESOURCE_USAGE, RESOURCE_UTILIZATION, RUNNING_JOBS, im_resource_manager_active_rcrs_per_pool_total, im_resource_manager_num_resources_consumed_total, im_resource_manager_num_resources_needed_total, im_resource_manager_pending_rcrs_per_pool_total, im_resource_manager_pool_limit_total, im_resource_manager_pool_reservation_total, im_resource_manager_pool_share_total

Specify type for telemetry. Options ['ACTIVE_FAIR_SHARE', 'FAIR_SHARE', 'POOL_CAPACITY', 'POOL_LIMIT', 'QUEUED_JOBS', 'RESOURCE_USAGE', 'RESOURCE_UTILIZATION', 'RUNNING_JOBS', 'im_resource_manager_active_rcrs_per_pool_total', 'im_resource_manager_num_resources_consumed_total', 'im_resource_manager_num_resources_needed_total', 'im_resource_manager_pending_rcrs_per_pool_total', 'im_resource_manager_pool_limit_total', 'im_resource_manager_pool_reservation_total', 'im_resource_manager_pool_share_total']

--end-time

Specifies the end time for statistics. Format: [yyyy-MM-dd::HH:mm:ss]. Default: now

--start-time

Specifies the start time for statistics. Format: [yyyy-MM-dd::HH:mm:ss].

--interval-unit

Possible choices: HOUR, MINUTE, SECOND

Data collection interval unit. Options: ['HOUR', 'MINUTE', 'SECOND']. Default: MINUTE

--interval-time

Data collection interval time value. Default: 1

--root-pool

Use to get root pool info instead.

Required named arguments
--resource-type

Filter data by resource type.
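
For instance, pulling hourly pool-capacity telemetry for one (placeholder) resource type over a fixed day, using the [yyyy-MM-dd::HH:mm:ss] timestamp format:

```shell
# Hourly POOL_CAPACITY samples for one resource type over 24 hours.
ngc base-command resource telemetry \
    --resource-type dgxa100.80g.8.norm \
    --telemetry-type POOL_CAPACITY \
    --start-time 2024-01-01::00:00:00 \
    --end-time 2024-01-02::00:00:00 \
    --interval-unit HOUR \
    --interval-time 1
```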

result

Job Result Commands

ngc base-command result [--debug] [--format_type <fmt>] [-h]  ...

Named Arguments

--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

result

Possible choices: download, info, remove

Sub-commands

info

List result file(s) generated by the successful execution of a job.

ngc base-command result info [--debug] [--files] [--format_type <fmt>] [-h]
                             <<job id>[:replica_id]>
Positional Arguments
<<job id>[:replica_id]>

Job ID

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--files

Include list of files with result details. The maximum number of files shown is 1000.

download

Download results by job ID.

ngc base-command result download [--debug] [--dest <path>] [--dir <wildcard>]
                                 [--dry-run] [--exclude <wildcard>]
                                 [--file <wildcard>] [--format_type <fmt>]
                                 [--resume <resume>] [--zip] [-h]
                                 <<job_id>[:replica_id]>
Positional Arguments
<<job_id>[:replica_id]>

Job ID

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--dest

Target destination for the downloaded file(s). Default: .

--file

File name(s) to be downloaded from the resultset. This flag can be supplied multiple times, and supports standard Unix shell-style wildcards like (?, [abc], [!a-z], etc..). This flag only filters based on the name of the file, not the path. The maximum number of files downloaded per filter is 1000.

--dir

Directory or directories to be downloaded from the resultset. This flag supports standard Unix shell-style wildcards like (?, [abc], [!a-z], etc..). The maximum number of files downloaded per filter is 1000.

--exclude

The file(s) or directory (directories) to exclude from the download. This flag can be supplied multiple times and supports standard Unix shell-style wildcards (e.g. ?, *, [abc], [!a-z]).

--zip

Download the entire result directory as a zip file.

--dry-run

List total size of the download without performing the download.

--resume

Resume a previously interrupted download for the result. Specify the file name saved by the earlier download; files will be downloaded to that file's directory.
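The --file, --dir, and --exclude filters all use the same Unix shell-style (fnmatch) pattern rules. A quick local illustration of how those patterns match, using plain shell case matching (no ngc required; the file names here are hypothetical):

```shell
# Demonstrate the fnmatch-style patterns accepted by --file/--dir/--exclude.
# matches <pattern> <name> prints "yes" if the name matches the pattern.
matches() {
    case "$2" in
        $1) echo yes ;;
        *)  echo no ;;
    esac
}

matches 'model_epoch_?.ckpt' 'model_epoch_3.ckpt'   # yes: ? matches one character
matches '[!a-z]*.log'        'Error.log'            # yes: [!a-z] excludes lowercase
matches '*.ckpt'             'training.log'         # no
```

Note that --file matches against the file name only, not its path, so a pattern like 'results/*.ckpt' would not select anything.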

remove

Remove the result for the given job ID(s).

ngc base-command result remove [--debug] [--dry-run] [--format_type <fmt>]
                               [--job-name <wildcard>] [--status <status>]
                               [-h] [-y]
                               <jobid|jobidrange|jobidlist|jobidpattern>
Positional Arguments
<jobid|jobidrange|jobidlist|jobidpattern>

Job ID(s). Valid examples: '1-5', '1,10-15', '*', '1?'

Named Arguments
--debug

Enable debug mode.

--format_type

Possible choices: ascii, csv, json

Specify the output format type. Supported formats are: ['ascii', 'csv', 'json']. Only commands that produce tabular data support csv format. Default: ascii

--status

Possible choices: CANCELED, FAILED, FAILED_RUN_LIMIT_EXCEEDED, FINISHED_SUCCESS, IM_INTERNAL_ERROR, INFINITY_POOL_MISSING, KILLED_BY_ADMIN, KILLED_BY_SYSTEM, KILLED_BY_USER, PENDING_TERMINATION, PREEMPTED, PREEMPTED_BY_ADMIN, RESOURCE_CONSUMPTION_REQUEST_IN_PROGRESS, RESOURCE_GRANTED, RESOURCE_GRANT_DENIED, RESOURCE_LIMIT_EXCEEDED, RESOURCE_RELEASED, RUNNING, TASK_LOST, UNKNOWN

Filter jobs matching input status. Multiple --status flags will OR together. Options: ['CANCELED', 'FAILED', 'FAILED_RUN_LIMIT_EXCEEDED', 'FINISHED_SUCCESS', 'IM_INTERNAL_ERROR', 'INFINITY_POOL_MISSING', 'KILLED_BY_ADMIN', 'KILLED_BY_SYSTEM', 'KILLED_BY_USER', 'PENDING_TERMINATION', 'PREEMPTED', 'PREEMPTED_BY_ADMIN', 'RESOURCE_CONSUMPTION_REQUEST_IN_PROGRESS', 'RESOURCE_GRANTED', 'RESOURCE_GRANT_DENIED', 'RESOURCE_LIMIT_EXCEEDED', 'RESOURCE_RELEASED', 'RUNNING', 'TASK_LOST', 'UNKNOWN']

-y, --yes

Automatically say yes to all interactive questions.

--dry-run

List the matching set of jobs without taking any action.

--job-name

Filter jobs whose names match the specified pattern. Supports standard Unix shell-style wildcards (e.g. ?, *, [abc], [!a-z]).

Examples

How to list ACEs?

$ ngc base-command ace list

+-----------------------+-------+---------------------------------------+-----------------------------+
| ACE                   | Id    | Description                           | Instances                   |
+-----------------------+-------+---------------------------------------+-----------------------------+
| nvidia-exmpl-1        | 63915 | For testing group A                   | nvidia.exmpl.a2,            |
|                       |       |                                       | nvidia.exmpl.a6             |
| nvidia-exmpl-2        | 56392 | For testing group B                   | nvidia.exmpl.b7,            |
|                       |       |                                       | nvidia.exmpl.b5             |
+-----------------------+-------+---------------------------------------+-----------------------------+

How to list ACEs using column arguments?

$ ngc base-command ace list --column id

+-----------------------+-------+
| ACE                   | Id    |
+-----------------------+-------+
| nvidia-exmpl-1        | 63915 |
| nvidia-exmpl-2        | 56392 |
+-----------------------+-------+

How to get ACE details?

$ ngc base-command ace info nvidia-exmpl-1

----------------------------------------------------
ACE Information
    Name: nvidia-exmpl-1
    Id: 63915
    Type: PUBLIC
    Created Date: 2018-03-04 05:01:00 UTC
    Created By:
    Description: For testing group A
    Auto Configuration Enabled: False
    Provider: NGN
    Storage Service Url: https://nvidia.com
    Proxy Service Url: https://nvidia.com
    Instances:
        Name: nvidia.exmpl.a2   GPUs:  1   GPU Mem: 16 GB   GPU Power:  160 W   CPUs:  8   System Mem:  50 GB
        Name: nvidia.exmpl.a6   GPUs:  2   GPU Mem: 16 GB   GPU Power:  160 W   CPUs: 16   System Mem: 100 GB
----------------------------------------------------

How to list datasets?

$ ngc base-command dataset list

+------------+--------------+------------------+--------+----------+-----------------+----------------------------+
| Dataset Id | Dataset Name | Description      | ACE Id | Size     | Status          | Created Date               |
+------------+--------------+------------------+--------+----------+-----------------+----------------------------+
| 3067       | mnist        | mnist is awesome | 53     | 11.06 MB | UPLOAD_COMPLETE | 2017-12-02 22:21:38 UTC +0 |
+------------+--------------+------------------+--------+----------+-----------------+----------------------------+

How to list datasets using column arguments?

$ ngc base-command dataset list --column id --column name --column size

+-----------------------+--------------+------------------+----------+
| Id                    | Integer Id   | Name             | Size     |
+-----------------------+--------------+------------------+----------+
| oSjpnu6GtEXRQZCqqDp31 | 3067         | mnist            | 11.06 MB |
+-----------------------+--------------+------------------+----------+

How to upload a dataset?

$ ngc base-command dataset upload --ace nv-exmpl-1 --desc "mnist is awesome" --source c:\dataset\mnist.zip mnist

Parsing list of files...
Number of files to be uploaded: 1
Creating dataset...
Dataset created id: 4597
Upload started: mnist.zip
2017-12-19 10:28:57:963000 Upload completed: mnist.zip Time taken: 3.28 seconds File size: 11600578 B
Total number of files uploaded 1/1
Dataset: 4597 Name: mnist Upload Completed
Dataset local path: c:\dataset\mnist.zip
Files uploaded: 1
Total Bytes transferred: 11600578 B
Started at: 2017-12-19 10:28:52.738000
Completed at: 2017-12-19 10:28:57.970000
Duration taken: 5.232 seconds
NOTE: It will take some time for the dataset to be available for download.

How to view dataset info?

$ ngc base-command dataset info 4597

----------------------------------------------------
  Dataset Information
    Id: 4597
    Name: mnist
    Created By: FooBar
    Email: foobar@nvidia.com
    ACE: nv-exmpl-1
    Size: 11.6 MB
    Total Files: 1
    Status: UPLOAD_COMPLETE
    Description: mnist is awesome
    Owned: No
    Shared: Private
    Files:
        /mnist.zip
----------------------------------------------------

How to download a dataset?

$ ngc base-command dataset download 9999

1 files to download, total size - 13 B
Downloaded 0 B, 0 files in 2s Download speed: 0 B/s
Downloaded 0 B, 0 files in 3s Download speed: 0 B/s
**********************************************************************
Dataset: 9999 Download status: Completed.
Downloaded local path: C:\Users\Admin\ngc-cli\9999
Total files downloaded: 1
Total downloaded size: 13 B
Started at: 2018-01-01 10:00:26.756000
Completed at: 2018-01-01 10:00:31.720000
Duration taken: 4.964 seconds
**********************************************************************

How to share a dataset?

$ ngc base-command dataset share 4597 --team cosmos
$ ngc base-command dataset share 4597 --org nvidia

How to revoke a dataset share?

$ ngc base-command dataset revoke-share 4597 --team cosmos
$ ngc base-command dataset revoke-share 4597 --org nvidia

How to remove a dataset?

$ ngc base-command dataset remove 4597

How to convert a result to a dataset?

$ ngc base-command dataset convert --from-result 9999 dataset-name

How to submit a job?

$ ngc base-command job run --ace nv-exmpl-1 --instance ngcv1 --name "Test Run"
                --image "nvidia/pytorch:17.11" --datasetid 5586:/data
                --result /result --command "/bin/ls -aFl /data"

 Job created.
 ----------------------------------------------------
 Job Information
   Id: 813581
   Name: Test Run
   Job Type: BATCH
   Created At: 2019-09-09 20:28:32 UTC
   Submitted By: <your name>
 Job Container Information
   Docker Image URL: nvidia/pytorch:17.11
 Job Commands
   Command: /bin/ls -aFl /data
   Dockerfile Image Entrypoint: False
 Datasets, Workspaces and Results
   Dataset ID: 5586
       Dataset Mount Point: /data
   Result Mount Point: /result
 Job Resources
   Instance Type: ngcv1
   Instance Details: 1 GPU, 8 CPU, 50 GB System Memory
   ACE: nv-exmpl-1
   Cluster: prd0-257-sjc3.nvk8s.com
 Job Status
   Status: CREATED
   Preempt Class: RUNONCE
 ----------------------------------------------------

How to submit a job using a JSON file?

1. First, upload the dataset to the ACE.

$ ngc base-command dataset upload --ace nv-exmpl-1 --desc "mnist is awesome" --source c:\dataset\mnist.zip mnist

Parsing list of files...
Number of files to be uploaded: 1
Creating dataset...
Dataset created id: 4597
Upload started: mnist.zip
2017-12-19 10:28:57:963000 Upload completed: mnist.zip Time taken: 3.28 seconds File size: 11600578 B
Total number of files uploaded 1/1
Dataset: 4597 Name: mnist Upload Completed
Dataset local path: c:\dataset\mnist.zip
Files uploaded: 1
Total Bytes transferred: 11600578 B
Started at: 2017-12-19 10:28:52.738000
Completed at: 2017-12-19 10:28:57.970000
Duration taken: 5.232 seconds
NOTE: It will take some time for the dataset to be available for download.

2. Create a tensorflow-mnist.json file containing the following JSON object. Make sure you use the dataset ID
generated when uploading the dataset.

{
  "dockerImageName": "nvidia/tensorflow:17.10",
  "aceName": "nv-exmpl-1",
  "name": "tensorflow-mnist",
  "command": "cp -r /data /tmp/; cd /tmp/data; unzip -j mnist.zip; cd /src/TensorFlow-Examples/examples/3_NeuralNetworks; cp /tmp/data/input_data.py .; python /src/TensorFlow-Examples/examples/3_NeuralNetworks/convolutional_network.py",
  "datasetMounts": [
    {
      "containerMountPoint": "/data",
      "id": 4597
    }
  ],
  "resultContainerMountPoint": "/result",
  "aceInstance": "ngcv1"
}
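Before submitting, you can confirm the file parses as valid JSON with Python's standard json.tool (a local convenience check, not part of the ngc CLI):

```shell
# Sanity-check the job definition before submitting it
# (assumes tensorflow-mnist.json is in the current directory)
if python3 -m json.tool tensorflow-mnist.json > /dev/null 2>&1; then
    echo "tensorflow-mnist.json: valid JSON"
else
    echo "tensorflow-mnist.json: missing or invalid JSON" >&2
fi
```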

3. Submit the job using the JSON file.

$ ngc base-command job run -f tensorflow-mnist.json

How to submit a job using a JSON file and arguments?

Note: Single-use arguments override the corresponding JSON values; multiple-use arguments are appended to them.

Submit a job with a new name and instance, overriding the values from the JSON file.

$ ngc base-command job run -f tensorflow-mnist.json --name tensorflow-mnist-2 --instance dgx1v.16g.2.norm
--------------------------------------------------
  Job Information
    Id: 1181406
    Name: tensorflow-mnist-2
    Number of Replicas: 1
    Job Type: BATCH
    Submitted By: <your name>
  Job Container Information
    Docker Image URL: nvidia/tensorflow:17.10
  Job Commands
    Command: cp -r /data /tmp/; cd /tmp/data; unzip -j mnist.zip; cd /src/TensorFlow-Examples/examples/3_NeuralNetworks; cp /tmp/data/input_data.py .; python /src/TensorFlow-Examples/examples/3_NeuralNetworks/convolutional_network.py
    Dockerfile Image Entrypoint: False
  Datasets, Workspaces and Results
    Dataset ID: 4597
        Dataset Mount Point: /data
        Prepopulated: No
    Result Mount Point: /result
  Job Resources
    Instance Type: dgx1v.16g.2.norm
    Instance Details: 2 GPU, 16 CPU, 100 GB System Memory
    ACE: nv-exmpl-1
    Cluster: prd0-257-sjc3.nvk8s.com
    Team: ngc
  Job Status
    Created at: 2020-05-15 19:59:52 UTC
    Status: CREATED
    Preempt Class: RUNONCE
--------------------------------------------------

How to kill a job?

$ ngc base-command job kill 19556

Submitted kill request for job Id: 19556

How to list jobs?

$ ngc base-command job list

+--------+-------------+-----------+------------------+----------+--------------------------+
| Id     | Name        | Team      | Status           | Duration | Status Details           |
+--------+-------------+-----------+------------------+----------+--------------------------+
| 123456 | job-name-4  |           | FINISHED_SUCCESS | 0:00:30  |                          |
| 123455 | job-name-3  |           | FINISHED_SUCCESS | 0:00:00  |                          |
| 314169 | job-name-2  | your_team | KILLED_BY_USER   | -        | (-1): Requested by user. |
| 314172 | job-name-1  | my+team   | FAILED           | 0:00:01  | (100): Container exited  |
|        |             |           |                  |          | with status 1.           |
+--------+-------------+-----------+------------------+----------+--------------------------+

How to list jobs using column arguments?

$ ngc base-command job list --column name --column status

+--------+------------+------------------+
| Id     | Name       | Status           |
+--------+------------+------------------+
| 123456 | job-name-4 | FINISHED_SUCCESS |
| 123455 | job-name-3 | FINISHED_SUCCESS |
| 314169 | job-name-2 | KILLED_BY_USER   |
| 314172 | job-name-1 | FAILED           |
+--------+------------+------------------+

How to view job info?

$ ngc base-command job info 710135
--------------------------------------------------
  Job Information
    Id: 710135
    Name: TestJobLumetta
    Job Type: BATCH
    Created At: 2019-06-20 20:38:35 UTC
    Submitted By: <your name>
  Job Container Information
    Docker Image URL: nvidia/pytorch:17.11
    Container name: c983092fb5755e946177f82bb01e2eb02021b9285697cd9cf06e8da0e8c382a0
  Job Commands
    Dockerfile Image Entrypoint: False
  Datasets, Workspaces and Results
    Result Mount Point: /result
  Job Resources
    Instance Type: ngcv1
    Instance Details: 1 GPU, 8 CPU, 50 GB System Memory
    ACE: Staging
    Cluster: stg-sjc3.nonprod-nvkong.com
  Job Status
    Started at: 2019-06-20 20:41:12 UTC
    Ended at: 2019-06-20 20:41:12 UTC
    Status: FINISHED_SUCCESS
    Status Type: OK
    Preempt Class: RUNONCE
--------------------------------------------------

How to attach to a Docker container?

Note: The specified job must be running.

$ ngc base-command job attach 15472

How to exec into a Docker container?

Note: The specified job must be running.

$ ngc base-command job exec 15472

How to run a job that generates telemetry data?

Note: The specified image must have metrics enabled, or the telemetry command will produce no output.

$ ngc base-command job run --name "test-run" --preempt RUNONCE --ace nv-exmpl-1
               --instance dgx1v.16g.2.norm --commandline "jupyter lab --ip=0.0.0.0
               --allow-root --no-browser --NotebookApp.token=''
               --notebook-dir=/ --NotebookApp.allow_origin='*' & date;
               export TF_ENABLE_AUTO_MIXED_PRECISION=1;
               cd /mnt/democode/nvidia/tensorflow-19.05-py3/workspace/nvidia-examples/cnn;
               mpiexec --allow-run-as-root -np 1 python resnet.py --layers=50
               --data_dir=/data/imagenet --log_dir=/result; " --result /result
               --image "nvidia/tensorflow:19.11-tf1-py3" --org nvexmpl --team onboarding
               --datasetid 9382:/data/imagenet
               --workspace wWL-MpYfSkmWcSjvZFwYwQ:/mnt/democode:RO --port 8888


Job created. (2-GPU configuration: "--instance dgx1v.16g.2.norm")
-------------------------------------------------------------------------------------------
Job Information
  Id: 1120624
  Name: test-run
  Number of Replicas: 1
  Job Type: BATCH
  Created At: 2020-04-07 05:38:38 UTC
  Submitted By: <your name>
Job Container Information
  Docker Image URL: nvidia/tensorflow:19.11-tf1-py3
Job Commands
  Command: jupyter lab --ip=0.0.0.0 --allow-root --no-browser --NotebookApp.token=''
           --notebook-dir=/ --NotebookApp.allow_origin='*' & date;
           export TF_ENABLE_AUTO_MIXED_PRECISION=1;
           cd /mnt/democode/nvidia/tensorflow-19.05-py3/workspace/nvidia-examples/cnn;
           mpiexec --allow-run-as-root -np 1 python resnet.py --layers=50
           --data_dir=/data/imagenet --log_dir=/result;
  Dockerfile Image Entrypoint: False
Datasets, Workspaces and Results
  Dataset ID: 9382
      Dataset Mount Point: /data/imagenet
      Prepopulated: No
  Workspace ID: wWL-MpYfSkmWcSjvZFwYwQ
      Workspace Name: sn-code
      Workspace Mount Point: /mnt/democode
      Workspace Mount Mode: RO
  Result Mount Point: /result
Job Resources
  Instance Type: dgx1v.16g.2.norm
  Instance Details: 2 GPU, 16 CPU, 100 GB System Memory
  ACE: nv-exmpl-1
  Cluster: prd0-257-sjc3.nvk8s.com
  Team: onboarding
Job Status
  Status: CREATED
  Preempt Class: RUNONCE
-------------------------------------------------------------------------------------------

How to get telemetry argument information?

$ ngc base-command job telemetry --help

usage: ngc batch telemetry [--ace <name>] [--debug] [--format_type <fmt>]
                           [--interval-time <t>] [--interval-unit <u>]
                           [--org <name>] [--statistics <form>]
                           [--team <name>] [--type <type>] [-h]
                           <<job id>[:replica_id]>

List telemetry data for the given job.

positional arguments:
  <<job id>[:replica_id]>
                        Job ID

optional arguments:
  -h, --help            Show this help message and exit.
  --ace <name>          Specify the ACE name. Use "--ace no-ace" to override
                        other sources and specify no ACE. Default: current
                        configuration
  --debug               Enable debug mode.
  --format_type <fmt>   Specify the output format type. Supported formats are:
                        ascii, csv, json. Only commands that produce tabular
                        data support csv format. Default: ascii
  --interval-time <t>   Data collection interval time value. Default: 1
  --interval-unit <u>   Data collection interval unit. Options: HOUR, MINUTE,
                        SECOND. Default: MINUTE
  --org <name>          Specify the organization name. Use "--org no-org" to
                        override other sources and specify no org. Default:
                        current configuration
  --statistics <form>   Statistical form of the data to report. Options: MAX,
                        MEAN, MIN. Default: MEAN
  --team <name>         Specify the team name. Use "--team no-team" to
                        override other sources and specify no team. Default:
                        current configuration
  --type <type>         A telemetry type to report. Options:
                        APPLICATION_TELEMETRY, CPU_UTILIZATION, GPU_FB_USED,
                        GPU_FI_PROF_DRAM_ACTIVE, GPU_FI_PROF_PCIE_RX_BYTES,
                        GPU_FI_PROF_PCIE_TX_BYTES,
                        GPU_FI_PROF_PIPE_TENSOR_ACTIVE,
                        GPU_NVLINK_BANDWIDTH_TOTAL, GPU_POWER_USAGE,
                        GPU_UTILIZATION, MEM_UTILIZATION. Default: None

How to view default telemetry?

Note: This case shows a 2-GPU configuration (--instance dgx1v.16g.2.norm)

$ ngc base-command job telemetry --interval-time 90 --interval-unit HOUR 1120624
+-----------------------------------+----------------------+-----------------------+
| Name                              | Time                 | Measurement           |
+-----------------------------------+----------------------+-----------------------+
| ngcjob_appmetrics_job_rate        | 2020-05-04T06:00:00Z | 351.479144517938      |
| ngcjob_appmetrics_job_rate        | 2020-05-08T00:00:00Z | 372.21862476360195    |
| ngcjob_appmetrics_learn_rate      | 2020-05-04T06:00:00Z | 0.7806435847527863    |
| ngcjob_appmetrics_learn_rate      | 2020-05-08T00:00:00Z | 0.019446378715977816  |
| ngcjob_appmetrics_num_epochs      | 2020-05-04T06:00:00Z | 38.453847984087396    |
| ngcjob_appmetrics_num_epochs      | 2020-05-08T00:00:00Z | 81.39723414001247     |
| ngcjob_appmetrics_loss            | 2020-05-04T06:00:00Z | 2.6386090555393493    |
| ngcjob_appmetrics_loss            | 2020-05-08T00:00:00Z | 1.356071743548495     |
| ngcjob_appmetrics_total_loss      | 2020-05-04T06:00:00Z | 4.068851139706283     |
| ngcjob_appmetrics_total_loss      | 2020-05-08T00:00:00Z | 1.9573122789348445    |
| GPU_UTILIZATION                   | 2020-05-04T06:00:00Z | 18.253698644530818    |
| GPU_UTILIZATION                   | 2020-05-08T00:00:00Z | 19.78424418604651     |
| GPU_UTILIZATION_gpu_5             | 2020-05-04T06:00:00Z | 36.45377190221379     |
| GPU_UTILIZATION_gpu_5             | 2020-05-08T00:00:00Z | 39.70700116686115     |
| GPU_UTILIZATION_gpu_6             | 2020-05-04T06:00:00Z | 0.0                   |
| GPU_UTILIZATION_gpu_6             | 2020-05-08T00:00:00Z | 0.0                   |
| GPU_FI_PROF_PIPE_TENSOR_ACTIVE    | 2020-05-04T06:00:00Z | 5.0732140045141945    |
| GPU_FI_PROF_PIPE_TENSOR_ACTIVE    | 2020-05-08T00:00:00Z | 5.531902552204171     |
| GPU_FI_PROF_PIPE_TENSOR_ACTIVE_gp | 2020-05-04T06:00:00Z | 10.155490175475455    |
| u_5                               |                      |                       |
| GPU_FI_PROF_PIPE_TENSOR_ACTIVE_gp | 2020-05-08T00:00:00Z | 11.076655052264798    |
| u_5                               |                      |                       |
| GPU_FI_PROF_PIPE_TENSOR_ACTIVE_gp | 2020-05-04T06:00:00Z | 0.0                   |
| u_6                               |                      |                       |
| GPU_FI_PROF_PIPE_TENSOR_ACTIVE_gp | 2020-05-08T00:00:00Z | 0.0                   |
| u_6                               |                      |                       |
| GPU_FI_PROF_DRAM_ACTIVE           | 2020-05-04T06:00:00Z | 8.480215505913197     |
| GPU_FI_PROF_DRAM_ACTIVE           | 2020-05-08T00:00:00Z | 9.223577235772378     |
| GPU_FI_PROF_DRAM_ACTIVE_gpu_5     | 2020-05-04T06:00:00Z | 16.963105877404956    |
| GPU_FI_PROF_DRAM_ACTIVE_gpu_5     | 2020-05-08T00:00:00Z | 18.46860465116283     |
| GPU_FI_PROF_DRAM_ACTIVE_gpu_6     | 2020-05-04T06:00:00Z | 0.0                   |
| GPU_FI_PROF_DRAM_ACTIVE_gpu_6     | 2020-05-08T00:00:00Z | 0.0                   |
| GPU_POWER_USAGE                   | 2020-05-04T06:00:00Z | 70.64337745072271     |
| GPU_POWER_USAGE                   | 2020-05-08T00:00:00Z | 72.78912856311828     |
| GPU_POWER_USAGE_gpu_5             | 2020-05-04T06:00:00Z | 95.24901714526119     |
| GPU_POWER_USAGE_gpu_5             | 2020-05-08T00:00:00Z | 99.43907317073177     |
| GPU_POWER_USAGE_gpu_6             | 2020-05-04T06:00:00Z | 46.06617461651623     |
| GPU_POWER_USAGE_gpu_6             | 2020-05-08T00:00:00Z | 46.04600233100266     |
| GPU_FB_USED                       | 2020-05-04T06:00:00Z | 6253.323733445449     |
| GPU_FB_USED                       | 2020-05-08T00:00:00Z | 6276.2501454333915    |
| GPU_FB_USED_gpu_5                 | 2020-05-04T06:00:00Z | 12359.62638056169     |
| GPU_FB_USED_gpu_5                 | 2020-05-08T00:00:00Z | 12361.0               |
| GPU_FB_USED_gpu_6                 | 2020-05-04T06:00:00Z | 156.0                 |
| GPU_FB_USED_gpu_6                 | 2020-05-08T00:00:00Z | 156.0                 |
| GPU_FI_PROF_PCIE_RX_BYTES         | 2020-05-04T06:00:00Z | 5.681953140065174E8   |
| GPU_FI_PROF_PCIE_RX_BYTES         | 2020-05-08T00:00:00Z | 6.240817377829639E8   |
| GPU_FI_PROF_PCIE_RX_BYTES_gpu_5   | 2020-05-04T06:00:00Z | 1.1338959633084602E9  |
| GPU_FI_PROF_PCIE_RX_BYTES_gpu_5   | 2020-05-08T00:00:00Z | 1.242978021476135E9   |
| GPU_FI_PROF_PCIE_RX_BYTES_gpu_6   | 2020-05-04T06:00:00Z | 2256750.4371780045    |
| GPU_FI_PROF_PCIE_RX_BYTES_gpu_6   | 2020-05-08T00:00:00Z | 2290032.879532164     |
| GPU_FI_PROF_PCIE_TX_BYTES         | 2020-05-04T06:00:00Z | 7.030049576372905E7   |
| GPU_FI_PROF_PCIE_TX_BYTES         | 2020-05-08T00:00:00Z | 7.68041546759744E7    |
| GPU_FI_PROF_PCIE_TX_BYTES_gpu_5   | 2020-05-04T06:00:00Z | 1.3947316406226337E8  |
| GPU_FI_PROF_PCIE_TX_BYTES_gpu_5   | 2020-05-08T00:00:00Z | 1.5263490327272728E8  |
| GPU_FI_PROF_PCIE_TX_BYTES_gpu_6   | 2020-05-04T06:00:00Z | 1222276.019745825     |
| GPU_FI_PROF_PCIE_TX_BYTES_gpu_6   | 2020-05-08T00:00:00Z | 1237624.7154471544    |
| GPU_NVLINK_BANDWIDTH_TOTAL        | 2020-05-04T06:00:00Z | 0.0                   |
| GPU_NVLINK_BANDWIDTH_TOTAL        | 2020-05-08T00:00:00Z | 0.0                   |
| GPU_NVLINK_BANDWIDTH_TOTAL_gpu_5  | 2020-05-04T06:00:00Z | 0.0                   |
| GPU_NVLINK_BANDWIDTH_TOTAL_gpu_5  | 2020-05-08T00:00:00Z | 0.0                   |
| GPU_NVLINK_BANDWIDTH_TOTAL_gpu_6  | 2020-05-04T06:00:00Z | 0.0                   |
| GPU_NVLINK_BANDWIDTH_TOTAL_gpu_6  | 2020-05-08T00:00:00Z | 0.0                   |
| CPU_UTILIZATION                   | 2020-05-08T00:00:00Z | 0.8535945500785372    |
| MEM_UTILIZATION                   | 2020-05-04T06:00:00Z | 1.0148062796099368E11 |
| MEM_UTILIZATION                   | 2020-05-08T00:00:00Z | 9.146897689060422E10  |
+-----------------------------------+----------------------+-----------------------+

How to select specific telemetry types?

$ ngc base-command job telemetry --interval-time 90 --interval-unit HOUR --type GPU_UTILIZATION
  --type APPLICATION_TELEMETRY 1120624
+-----------------------------------+----------------------+-----------------------+
| Name                              | Time                 | Measurement           |
+-----------------------------------+----------------------+-----------------------+
| GPU_UTILIZATION                   | 2020-05-04T06:00:00Z | 18.253698644530818    |
| GPU_UTILIZATION                   | 2020-05-08T00:00:00Z | 19.78424418604651     |
| GPU_UTILIZATION_gpu_5             | 2020-05-04T06:00:00Z | 36.45377190221379     |
| GPU_UTILIZATION_gpu_5             | 2020-05-08T00:00:00Z | 39.70700116686115     |
| GPU_UTILIZATION_gpu_6             | 2020-05-04T06:00:00Z | 0.0                   |
| GPU_UTILIZATION_gpu_6             | 2020-05-08T00:00:00Z | 0.0                   |
| ngcjob_appmetrics_job_rate        | 2020-05-04T06:00:00Z | 351.479144517938      |
| ngcjob_appmetrics_job_rate        | 2020-05-08T00:00:00Z | 372.21862476360195    |
| ngcjob_appmetrics_learn_rate      | 2020-05-04T06:00:00Z | 0.7806435847527863    |
| ngcjob_appmetrics_learn_rate      | 2020-05-08T00:00:00Z | 0.019446378715977816  |
| ngcjob_appmetrics_num_epochs      | 2020-05-04T06:00:00Z | 38.453847984087396    |
| ngcjob_appmetrics_num_epochs      | 2020-05-08T00:00:00Z | 81.39723414001247     |
| ngcjob_appmetrics_loss            | 2020-05-04T06:00:00Z | 2.6386090555393493    |
| ngcjob_appmetrics_loss            | 2020-05-08T00:00:00Z | 1.356071743548495     |
| ngcjob_appmetrics_total_loss      | 2020-05-04T06:00:00Z | 4.068851139706283     |
| ngcjob_appmetrics_total_loss      | 2020-05-08T00:00:00Z | 1.9573122789348445    |
+-----------------------------------+----------------------+-----------------------+

How to get telemetry data in CSV format?

$ ngc base-command job telemetry --interval-time 90 --interval-unit HOUR --format_type csv 1120624

  Time,App Metrics:job_rate,App Metrics:learn_rate,App Metrics:num_epochs,App Metrics:loss,App Metrics:total_loss,GPU_UTILIZATION,GPU_UTILIZATION_gpu_5,GPU_UTILIZATION_gpu_6,GPU_FI_PROF_PIPE_TENSOR_ACTIVE,GPU_FI_PROF_PIPE_TENSOR_ACTIVE_gpu_5,GPU_FI_PROF_PIPE_TENSOR_ACTIVE_gpu_6,GPU_FI_PROF_DRAM_ACTIVE,GPU_FI_PROF_DRAM_ACTIVE_gpu_5,GPU_FI_PROF_DRAM_ACTIVE_gpu_6,GPU_POWER_USAGE,GPU_POWER_USAGE_gpu_5,GPU_POWER_USAGE_gpu_6,GPU_FB_USED,GPU_FB_USED_gpu_5,GPU_FB_USED_gpu_6,GPU_FI_PROF_PCIE_RX_BYTES,GPU_FI_PROF_PCIE_RX_BYTES_gpu_5,GPU_FI_PROF_PCIE_RX_BYTES_gpu_6,GPU_FI_PROF_PCIE_TX_BYTES,GPU_FI_PROF_PCIE_TX_BYTES_gpu_5,GPU_FI_PROF_PCIE_TX_BYTES_gpu_6,GPU_NVLINK_BANDWIDTH_TOTAL,GPU_NVLINK_BANDWIDTH_TOTAL_gpu_5,GPU_NVLINK_BANDWIDTH_TOTAL_gpu_6,CPU_UTILIZATION,MEM_UTILIZATION
  2020-05-04T06:00:00Z,351.479144517938,0.7806435847527863,38.453847984087396,2.6386090555393493,4.068851139706283,18.253698644530818,36.45377190221379,0.0,5.0732140045141945,10.155490175475455,0.0,8.480215505913197,16.963105877404956,0.0,70.64337745072271,95.24901714526119,46.06617461651623,6253.323733445449,12359.62638056169,156.0,5.681953140065174E8,1.1338959633084602E9,2256750.4371780045,7.030049576372905E7,1.3947316406226337E8,1222276.019745825,0.0,0.0,0.0,,1.0148062796099368E11
  2020-05-08T00:00:00Z,373.13010860893604,0.018556020195032785,81.65085542400061,1.341255228094049,1.9361629716369715,19.79727223131477,39.70284463894965,0.0,5.537745098039208,11.087568157033791,0.0,9.228462377317365,18.477074235807915,0.0,72.7692570960701,99.37005337690641,46.052045951860336,6268.487725040916,12361.0,156.0,6.252954906637459E8,1.2455773122448087E9,2290148.461031833,7.701736719650654E7,1.5296292707322404E8,1237446.4907306435,0.0,0.0,0.0,0.8583701189035325,9.224660433413626E10
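The CSV form is convenient for post-processing with standard command-line tools. A self-contained sketch, where telemetry.csv is a reduced, hypothetical stand-in for the real output above:

```shell
# Write a reduced stand-in for the telemetry CSV, then extract two columns
cat > telemetry.csv <<'EOF'
Time,GPU_POWER_USAGE,MEM_UTILIZATION
2020-05-04T06:00:00Z,70.64,1.01E11
2020-05-08T00:00:00Z,72.79,9.15E10
EOF

# Print timestamp and GPU power usage, skipping the header row
awk -F',' 'NR > 1 { print $1, $2 }' telemetry.csv
# 2020-05-04T06:00:00Z 70.64
# 2020-05-08T00:00:00Z 72.79
```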

How to get result info?

$ ngc base-command result info 9999

How to remove a job's result?

$ ngc base-command result remove 9999

How to download a result?

$ ngc base-command result download 9999

How to create a workspace?

$ ngc base-command workspace create --ace nvidia-exmpl-1

Successfully created workspace with ID: 'e9UJrwAPTSmOQCxA'
  Workspace Information
      ID: e9UJrwAPTSmOQCxA
      Name: None
      Created By: ngccli
      Size: 0 B
      ACE: nvidia-exmpl-1
      Org: robot
      Description:
      Shared with: None

How to list workspaces?

$ ngc base-command workspace list

+------------------+------------------+------------------+----------------+------------------+--------+--------------+-------+----------+
| Id               | Name             | Description      | ACE            | Creator Username | Shared | Created Date | Owned | Size     |
+------------------+------------------+------------------+----------------+------------------+--------+--------------+-------+----------+
| e9UJrwAPTSmOQCxA |                  |                  | nvidia-exmpl-1 | Amy Smith        | Yes    | 2021-02-02   | No    | 14.01 KB |
|                  |                  |                  |                |                  |        | 11:38:23 UTC |       |          |
| cizRgbYrQp7nFo   | workspace-test   |                  | nvidia-exmpl-1 | Bill Williams    | Yes    | 2021-02-19   | No    | 0 B      |
|                  |                  |                  |                |                  |        | 01:11:25 UTC |       |          |
| jHJx5rBoT3uTe-C8 | wkspc-example    |                  | nvidia-exmpl-2 | Chad Brown       | Yes    | 2020-06-04   | No    | 2.67 GB  |
|                  |                  |                  |                |                  |        | 10:09:25 UTC |       |          |
+------------------+------------------+------------------+----------------+------------------+--------+--------------+-------+----------+

How to list workspaces using column arguments?

$ ngc base-command workspace list --column ace --column 'updated=Date'

+------------------------+----------------+-------------------------+
| Id                     | Ace            | Date                    |
+------------------------+----------------+-------------------------+
| e9UJrwAPTSmOQCxA       | nvidia-exmpl-1 | 2021-03-18 13:48:36 UTC |
| cizRgbYrQp7nFo         | nvidia-exmpl-1 | 2021-02-25 12:33:09 UTC |
| jHJx5rBoT3uTe-C8       | nvidia-exmpl-2 | 2020-06-04 19:42:45 UTC |
+------------------------+----------------+-------------------------+

How to get workspace info?

$ ngc base-command workspace info workspace1

----------------------------------------------------
  Workspace Information
    ID: nwJF12QPQvKkPm5aIBTTkg
    Name: workspace1
    ACE: Staging
    Org: nvidia
    Description: workspace testing
    Shared with: nvidia, nvidia/a10, nvidia/a11, nvidia/workspace_test
----------------------------------------------------