NVIDIA Nsight Operator enables your containerized applications to be profiled by NVIDIA Nsight tools (currently, only Nsight Systems). This solution leverages a Kubernetes dynamic admission controller to inject an init container, volumes containing the NVIDIA Nsight tools and their configuration, environment variables, and a security context when your Pod is created or updated.
Your cluster must have the admissionregistration.k8s.io/v1 API enabled. Verify that by running the following command:
kubectl api-versions | grep admissionregistration.k8s.io/v1
The result should be:
admissionregistration.k8s.io/v1
Note: Additionally, the MutatingAdmissionWebhook and ValidatingAdmissionWebhook admission controllers should be added and listed in the correct order in the admission-control flag of kube-apiserver. Please refer to the Kubernetes documentation. This is likely set by default if your cluster is running on EKS, AKS, OKE, or GKE.
helm install \
nsight-operator https://helm.ngc.nvidia.com/nvidia/devtools/charts/nsight-operator-1.1.0.tgz
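After installation, you can confirm that the release is deployed and that the operator's mutating webhook has been registered. These are standard Helm and kubectl commands; the grep pattern is an assumption, since the exact webhook configuration name depends on the chart:
helm list
kubectl get mutatingwebhookconfigurations | grep -i nsight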
The default installation will activate profiling for all applications within Pods that have either their namespace or the Pod itself labeled with nvidia-nsight-profile=enabled.
The NVIDIA Nsight Operator can be customized to suit particular needs. Likely, you will need to configure the profile.devtoolArgs, profile.injectionIncludePatterns, profile.injectionExcludePatterns, profile.volumes, and profile.volumeMounts values. A values file can be used for setting these parameters.
Sample custom_values.yaml. This configuration will enable profiling for any instance of yourawesomeapp found in injected Pods and limit collection duration to 20 seconds.
# Nsight Systems profiling configuration
profile:
  # The arguments for Nsight Systems. The placeholders will be replaced with the actual values.
  # Limit collection duration to 20 seconds
  devtoolArgs: "profile --duration 20 --kill none -o /home/auto_{NVDT_PROCESS_NAME}_%{NVDT_POD_FULLNAME}_%{NVDT_CONTAINER_NAME}_{NVDT_TIMESTAMP}_{NVDT_UID}.nsys-rep"
  # The regex to match applications to profile.
  injectionIncludePatterns: ".*yourawesomeapp.*"
helm install -f custom_values.yaml \
nsight-operator https://helm.ngc.nvidia.com/nvidia/devtools/charts/nsight-operator-1.1.0.tgz
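To confirm which values the release is using, you can inspect them with a standard Helm command:
helm get values nsight-operator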
Sample custom_values_launch.yaml. This configuration will inject Nsight Systems for later profiling of any instance of yourawesomeapp found in injected Pods. nsys_k8s.py can then be used to start/stop collection.
# Nsight Systems profiling configuration
profile:
  # The arguments for Nsight Systems. The placeholders will be replaced with the actual values.
  devtoolArgs: "launch"
  # The regex to match applications to profile.
  injectionIncludePatterns: ".*yourawesomeapp.*"
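With this launch configuration in place, collection can be started and stopped on demand. Below is a minimal sketch using the nsys_k8s.py interface described later in this document (the session name is resolved automatically):
# Start collection in all injected Pods with active launch sessions
./nsys_k8s.py nsys start
# ... exercise the application ...
# Stop collection and finalize the reports
./nsys_k8s.py nsys stop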
Sample custom_values_extended.yaml: This configuration enables profiling for any instance of yourawesomeapp running in injected Pods, except for those started with the argumenttoskip argument. Profiling is configured to collect data for a maximum duration of 20 seconds. The nsys-output-volume will be mounted to all profiled Pods. A Persistent Volume Claim must be available in the target namespaces for successful operation. Additionally, kernel.perf_event_paranoid will be set to -1 on all nodes where profiling is performed.
# Nsight Systems profiling configuration
profile:
  # A volume to store profiling results. It can be omitted, but in this case, the results will be lost after the pod
  # deletion and they will not be in the common location.
  # You may skip this section if you already have a shared volume mounted for all the profiling pods.
  volumes:
    [
      {
        "name": "nsys-output-volume",
        "persistentVolumeClaim": { "claimName": "CSP-managed-disk" },
      },
    ]
  volumeMounts:
    [{ "name": "nsys-output-volume", "mountPath": "/mnt/nsys/output" }]
  # The arguments for Nsight Systems. The placeholders will be replaced with the actual values.
  devtoolArgs: "profile --duration 20 --kill none -o /mnt/nsys/output/auto_{NVDT_PROCESS_NAME}_%{NVDT_POD_FULLNAME}_%{NVDT_CONTAINER_NAME}_{NVDT_TIMESTAMP}_{NVDT_UID}.nsys-rep"
  # The regex to match applications to profile.
  injectionIncludePatterns: ".*yourawesomeapp.*"
  injectionExcludePatterns: ".*yourawesomeapp.*argumentoskip.*"
# Node configurations which should be performed. Currently, only kernel.perf_event_paranoid is supported.
machineConfig:
  - name: kernel.perf_event_paranoid
    value: -1
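The configuration above assumes an existing Persistent Volume Claim. Below is a minimal sketch of such a claim; the storage class and size are assumptions, and metadata.name must match the claimName referenced in profile.volumes (note that Kubernetes object names must be lowercase, so adjust the claimName accordingly):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # Must match profile.volumes[].persistentVolumeClaim.claimName
  name: csp-managed-disk
spec:
  accessModes:
    # Assumption: use ReadWriteMany if profiled Pods run on multiple nodes
    - ReadWriteOnce
  # Assumption: replace with your CSP's storage class
  storageClassName: standard
  resources:
    requests:
      # Assumption: size the volume for your reports
      storage: 10Gi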
Variable | Description | Default value |
---|---|---|
profile.devtoolArgs | The parameters for Nsight Systems used during profiling are detailed in the Nsight Systems User Guide. A comprehensive list of available parameters is provided there. Placeholders within these parameters will be substituted with their actual values during execution. It is recommended to include {NVDT_TIMESTAMP} and {NVDT_UID} placeholders in the output file name to keep filenames unique. Otherwise, the report may be overwritten or not generated at all. Example: profile -o /mnt/nsys/output/auto_{NVDT_PROCESS_NAME}_%{NVDT_POD_FULLNAME}_%{NVDT_CONTAINER_NAME}_{NVDT_TIMESTAMP}_{NVDT_UID}.nsys-rep | |
profile.injectionIncludePatterns | Regex patterns that specify which processes or commands in the container should be profiled. | [".*"] |
profile.injectionExcludePatterns | Regex patterns that specify which processes or commands in the container should NOT be profiled. | [".*\bbash( \|$).*", ".*\bsh( \|$).*", ".*\bzsh( \|$).*", ".*\bdash( \|$).*", ".*\bnsys( \|$).*"] |
profile.volumes | Additional volumes that will be injected into profiled containers. Can be useful for storing profiling results. | |
profile.volumeMounts | Volume mounts that will be injected into profiled containers. Can be useful for storing profiling results. | |
profile.env | Environment variables to inject in the profiled container. Each item must have name and value fields. | |
profile.enableDefault | Should the default (included in setup) profiling configuration be enabled? | true |
privileged | Enables profiled containers to be run in privileged mode (can be used to collect GPU metrics). | None |
capabilities | Enables profiled containers to be run with specific capabilities (for instance, SYS_ADMIN can be used to collect GPU metrics). | None |
machineConfig | Array of name/value pairs (system configurations) which should be updated before profiling on target nodes (currently, only kernel.perf_event_paranoid is supported). More info about kernel.perf_event_paranoid. To prevent the NVIDIA Nsight Operator from updating node configurations, set machineConfig: null in the custom_values.yaml file. | [{ name: kernel.perf_event_paranoid, value: 2 }] |
Placeholder | Replacement |
---|---|
{NVDT_UID} | A random alphanumeric string (8 symbols) |
{NVDT_PROCESS_NAME} | The profiled process name |
{NVDT_PROCESS_ID} | The profiled process ID |
{NVDT_TIMESTAMP} | The UNIX timestamp (in ms) |
%{ANY ENVIRONMENT VARIABLE} | The value of the "ANY ENVIRONMENT VARIABLE" environment variable inside the container. The NVDT_POD_FULLNAME and NVDT_CONTAINER_NAME environment variables are set by the NVIDIA Nsight Operator |
To enable automatic injection for all Pods in a namespace, add the nvidia-nsight-profile=enabled label to the namespace.
kubectl label namespaces <namespace name> nvidia-nsight-profile=enabled
To enable automatic injection for a specific resource in a namespace, add the nvidia-nsight-profile=enabled label to the resource.
kubectl label <resource type> <resource name> nvidia-nsight-profile=enabled
At this point, any new Pod will be considered for injection based on labels and injectionIncludePatterns. An already running Pod cannot be injected; you must restart it to enable profiling. By the same token, if you remove the label or set it to disabled, you will need to restart the Pod to remove the injection.
Resource with more than one replica
kubectl rollout restart <resource type>/<resource name>
For example:
kubectl rollout restart deployment/amazing-service
Resource with only one replica
kubectl scale <resource type>/<resource name> --replicas=0
kubectl scale <resource type>/<resource name> --replicas=1
For example:
kubectl scale deployment/amazing-service --replicas=0
kubectl scale deployment/amazing-service --replicas=1
In Kubernetes environments, managing sidecar injection and profiling configurations can be challenging, particularly in dynamic scenarios where Pods are created by custom resources or controllers. The process requires more than just filtering Pods — it requires selecting the appropriate configuration for each Pod, application, or namespace. While labels offer a basic level of control, they often lack the granularity required for precise targeting and configuration. To address these challenges, the NVIDIA Nsight Operator provides advanced mechanisms for filtering Pods and applying tailored configurations, enabling effective management and optimization of complex and dynamic environments.
NVIDIA Nsight Operator supports the following mechanisms for filtering and targeting Pods for injection:
- matchConditions: CEL expressions evaluated against the Pod object
- namespaceSelector: label selectors matched against the labels of the Pod's namespace
- objectSelector: label selectors matched against the Pod's own labels
- profiles and profileRef: named profiling configurations applied when an injection configuration matches
Below is a sample custom_values_fine_grained.yaml configuration demonstrating the use of these mechanisms for fine-grained injection control.
# Disable the default configuration
profile:
  enableDefault: false
injectionProfileConfig:
  defaultProfileRef: "triton-profile"
  profiles:
    - name: "triton-profile"
      devtoolArgs: "profile --duration 20 --kill none -o /home/auto_{NVDT_PROCESS_NAME}_%{NVDT_POD_FULLNAME}_%{NVDT_CONTAINER_NAME}_{NVDT_TIMESTAMP}_{NVDT_UID}.nsys-rep"
      injectionIncludePatterns:
        - "^/opt/tritonserver/bin/tritonserver.*$"
    - name: "other-profile"
      devtoolArgs: "profile --duration 30 --kill none -o /home/auto_{NVDT_PROCESS_NAME}_%{NVDT_POD_FULLNAME}_%{NVDT_CONTAINER_NAME}_{NVDT_TIMESTAMP}_{NVDT_UID}.nsys-rep"
      injectionIncludePatterns:
        - "^python MaxText/train.py.*$"
      env:
        - name: NSYS_NVTX_PROFILER_REGISTER_ONLY
          value: "0"
  injectionConfigs:
    - name: "has-injection-label-or-demo-injection-name"
      matchConditions:
        - name: "has-injection-label-or-demo-injection-name"
          expression: >
            ((has(object.metadata.labels) &&
            'nvidia-nsight-profile' in object.metadata.labels &&
            object.metadata.labels['nvidia-nsight-profile'] == 'enabled') ||
            object.metadata.name.contains('demo-injection'))
    - name: "train-injection"
      profileRef: "other-profile"
      matchConditions:
        - name: "fine-grained"
          expression: |
            (
              object.metadata.generateName.startsWith("example-deployment-name-") &&
              object.metadata.namespace == "example-ns"
            ) ||
            (
              object.metadata.ownerReferences.exists(ref, ref.kind == "DaemonSet" &&
              ref.name == "example-daemonset")
            )
    - name: "namespace-selector-filter"
      profileRef: "other-profile"
      namespaceSelector:
        matchLabels:
          custom-injection-label: enabled
    - name: "object-selector-filter"
      profileRef: "other-profile"
      objectSelector:
        matchLabels:
          custom-injection-label: enabled
    - name: "combined-filter"
      profileRef: "other-profile"
      namespaceSelector:
        matchLabels:
          combined-custom-injection-label: enabled
      objectSelector:
        matchLabels:
          combined-custom-injection-label: enabled
      matchConditions:
        - name: "combined"
          expression: 'object.metadata.name.startsWith("example-pod-prefix-")'
The above configuration customizes profiling parameters for different applications and Pods based on their metadata:
- "triton-profile" is applied to /opt/tritonserver/bin/tritonserver processes in Pods with the nvidia-nsight-profile=enabled label or Pods with demo-injection in their name.
- "other-profile" is applied to python MaxText/train.py processes in Pods whose generated name starts with example-deployment-name- in the example-ns namespace, or in Pods owned by the example-daemonset DaemonSet.
- "namespace-selector-filter" matches Pods in namespaces with the custom-injection-label=enabled label.
- "object-selector-filter" matches Pods with the custom-injection-label=enabled label.
- "combined-filter" matches Pods in namespaces with the combined-custom-injection-label=enabled label that also carry the Pod label combined-custom-injection-label=enabled and have a name starting with example-pod-prefix-.
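To make the last rule concrete, here is a minimal sketch of a Pod that would match the "combined-filter" configuration, assuming its namespace carries the combined-custom-injection-label=enabled label (the Pod name and image are illustrative):
apiVersion: v1
kind: Pod
metadata:
  # The name prefix satisfies the "combined" matchCondition
  name: example-pod-prefix-demo
  labels:
    # Satisfies the objectSelector
    combined-custom-injection-label: enabled
spec:
  containers:
    - name: app
      image: nvidia/cuda:12.8.0-base-ubuntu24.04
      command: ["sleep", "infinity"]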
NVIDIA Nsight Operator supports multi-tenant environments where different teams or users require separate configurations. NVIDIA Nsight Operator can be configured to apply different profiles and injection rules based on the namespace or Pod name. Below is a sample custom_values_multi_tenant.yaml configuration demonstrating the use of profiles and injection rules for multi-tenant environments. It enables the possibility of profiling (profiling itself is not yet active after installation) in all namespaces with the nvidia-nsight-profile=enabled label:
# Disable the default configuration
profile:
  enableDefault: false
clusterWideInjectionFilter:
  matchConditions:
    - name: "is-pod"
      expression: "object.kind == 'Pod'"
  namespaceSelector:
    matchExpressions:
      - key: kubernetes.io/metadata.name
        operator: "NotIn"
        values:
          - kube-system
          - kube-node-lease
          - kube-public
      - key: nvidia-nsight-profile
        operator: "In"
        values:
          - enabled
To enable profiling in a specific namespace, the user of this namespace should add a NsightOperatorProfileConfig resource with the profiling configuration content. The spec can include all subvalues of the profile or injectionProfileConfig values (see Advanced configuration values) supported by the installation configuration. The sample custom_installation_injection_config.yaml configuration (deployable with the kubectl apply -n example-ns -f custom_installation_injection_config.yaml command) demonstrates how to enable profiling in a specific namespace:
apiVersion: nvidia.com/v1
kind: NsightOperatorProfileConfig
metadata:
  name: custom-profile-config
spec:
  defaultProfileRef: "update-profile"
  profiles:
    - name: "update-profile"
      devtoolArgs: "profile --duration 2 --kill none -o /home/separate_auto_{NVDT_PROCESS_NAME}_%{NVDT_POD_FULLNAME}_%{NVDT_CONTAINER_NAME}_{NVDT_TIMESTAMP}_{NVDT_UID}.nsys-rep"
      injectionIncludePatterns:
        - "^/cuda-samples/vectorAdd_forever.*$"
      logOutput: /mnt/nv/out.log
  injectionConfigs:
    - name: "has-injection-label-or-demo-injection-name"
      disabled: true
    - name: "starts-with-name"
      matchConditions:
        - name: "starts-with-name"
          expression: >
            (has(object.metadata.generateName) &&
            object.metadata.generateName.contains('cuda-vector-add-forever'))
The configuration above enables profiling for 2 seconds for all /cuda-samples/vectorAdd_forever processes in all Pods whose generated name contains cuda-vector-add-forever. It also disables the "has-injection-label-or-demo-injection-name" injection configuration (if it was specified in the default cluster-wide configuration or any other NsightOperatorProfileConfig in the Pod's namespace).
There can be multiple NsightOperatorProfileConfig resources in a Pod's namespace. NVIDIA Nsight Operator will apply the configuration from all of them. Injection configurations cannot have the same name in the same namespace (the only situation in which the same name is allowed is when the configuration is disabled).
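To review the configurations deployed in a namespace, you can list the custom resources. The plural resource name below is an assumption derived from the kind; adjust it if your cluster registers it differently:
kubectl get nsightoperatorprofileconfigs -n example-ns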
Variable | Type / Structure | Description |
---|---|---|
defaultProfileRef | string | The name of the default profile from the profiles section; it is used when an injectionConfig in the current configuration file does not explicitly specify a profile name to use. |
profiles | list of objects | Defines one or more "profiles". Each profile describes how the Nsight tool injection should be performed (e.g., command-line arguments, environment variables). |
profiles[].name | string | A unique name identifying the profile. |
profiles[].devtoolArgs | string | The parameters for Nsight Systems used during profiling are detailed in the Nsight Systems User Guide. A comprehensive list of available parameters is provided there. Placeholders within these parameters will be substituted with their actual values during execution. It is recommended to include {NVDT_TIMESTAMP} and {NVDT_UID} placeholders in the output file name to keep filenames unique. Otherwise, the report may be overwritten or not generated at all. Example: profile -o /mnt/nsys/output/auto_{NVDT_PROCESS_NAME}_%{NVDT_POD_FULLNAME}_%{NVDT_CONTAINER_NAME}_{NVDT_TIMESTAMP}_{NVDT_UID}.nsys-rep |
profiles[].injectionIncludePatterns | list of strings (regex patterns) | Regex patterns that specify which processes or commands in the container should be profiled. |
profiles[].injectionExcludePatterns | list of strings (regex patterns) | Regex patterns that specify which processes or commands in the container should NOT be profiled. |
profiles[].env | list of key/value pairs | Environment variables to inject in the profiled container. Each item must have name and value fields. |
profiles[].volumes | list of key/value pairs | Additional volumes that will be injected into profiled containers. Can be useful for storing profiling results. |
profiles[].volumeMounts | list of key/value pairs | Volume mounts that will be injected into profiled containers. Can be useful for storing profiling results. |
profiles[].logOutput | string | Configures the logging output for the library. Can be stdout, stderr, or a file path. By default, logging is disabled. |
injectionConfigs | list of objects | A list of rules that determines which Pods should receive the injection and profiling. |
injectionConfigs[].name | string | A unique name identifying this set of injection rules. The only situation in which the same name is allowed is when all configurations with the same name, except at most one, contain only the disabled field. All the conditions (matchConditions, namespaceSelector, objectSelector), if specified, within a single injection configuration must be true for the injection configuration to be considered a match. |
injectionConfigs[].profileRef | string | The name of a specific profile to use if this injection config matches a given Pod. If omitted, the defaultProfileRef is used instead. |
injectionConfigs[].matchConditions | list of objects | A list of conditions to evaluate (using CEL expressions). If all conditions are met, the injection config is considered a match. |
injectionConfigs[].matchConditions[].name | string | A name for the match condition. |
injectionConfigs[].matchConditions[].expression | string (CEL expression) | A CEL-based expression that returns true if the Pod should be injected. |
injectionConfigs[].namespaceSelector | object | Label selector to match namespace labels. Only pods in namespaces that match these labels will be injected. |
injectionConfigs[].namespaceSelector.matchLabels | map of string:string | Key-value pairs that must be present on the namespace for the injection to occur. |
injectionConfigs[].objectSelector | object | Label selector to match pod labels. Only pods with labels matching these values will be injected. |
injectionConfigs[].objectSelector.matchLabels | map of string:string | Key-value pairs that must be present on the pod for the injection to occur. |
Profiling can be controlled using the nsys_k8s.py script. The script can be found in NVIDIA Nsight Operator Resources. This script facilitates the execution of Nsight Systems commands within profiled containers of Kubernetes Pods. Additionally, it provides a convenient method for downloading profiling results. nsys_k8s searches for Pods that are labeled for profiling and looks for active Nsight Systems sessions launched by the NVIDIA Nsight Operator in them. The script supports Pod filtering using field selectors. The script's dependencies can be installed with pip install -r requirements.txt.
The script supports executing Nsight Systems commands within containers of Kubernetes Pods, with optional filters for targeting specific namespaces, containers, and Pods. Nsight Systems commands are executed only on Pods that have active Nsight Systems sessions. The general command structure is as follows:
./nsys_k8s.py [--field-selector SELECTOR] nsys [nsys_arguments...]
Argument | Description |
---|---|
--field-selector | (Optional) Filter Kubernetes objects to identify those on which an Nsight Systems command will be executed, based on the value(s) of one or more resource fields. See Field selectors. |
nsys_arguments... | Specify the Nsight Systems command and arguments you wish to execute. For example, start --sampling-frequency=5000. For commands that support the --output argument, if it is not provided, it will be generated based on the profile.devtoolArgs Helm option value. |
Do not specify the session name in nsys_arguments; it will be obtained automatically.
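For example (the namespace filter below is illustrative):
# Start collection in Pods in the example-ns namespace
./nsys_k8s.py --field-selector metadata.namespace=example-ns nsys start
# Stop collection; --output is generated from profile.devtoolArgs if omitted
./nsys_k8s.py --field-selector metadata.namespace=example-ns nsys stop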
The script supports the download command, which provides a convenient way to download profiling results from profiled Pods.
./nsys_k8s.py [--field-selector SELECTOR] download [destination]
Argument | Description |
---|---|
--field-selector | (Optional) Filter Kubernetes objects to identify those on which an Nsight Systems command will be executed, based on the value(s) of one or more resource fields. See Field selectors. |
destination | The path of the directory into which the profiling results will be downloaded. |
--remove-source | (Optional) Delete source files from Pods after downloading them. |
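For example (the destination directory is illustrative):
./nsys_k8s.py download ./nsys-reports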
The script supports the check command, which provides a convenient way to check whether the NVIDIA Nsight Operator has been injected into a specific Pod.
./nsys_k8s.py check [-n namespace] [pod]
Argument | Description |
---|---|
-n | (Optional) The namespace of the Pod to check. |
pod | The name of the Pod to check. |
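For example (the Pod and namespace names are illustrative):
./nsys_k8s.py check -n example-ns example-pod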
NVIDIA Nsight Operator configurations can be modified after the installation. Please note, however, that the configuration of already injected Pods will not be updated until they are restarted.
helm upgrade -f custom_values.yaml \
nsight-operator https://helm.ngc.nvidia.com/nvidia/devtools/charts/nsight-operator-1.1.0.tgz
GPU Metrics Samples can only be collected by one process per GPU. The most straightforward way to avoid collisions is to collect GPU metrics from a single custom DaemonSet per node. The following resource configuration can be used to achieve that:
kubectl apply -f ./gpu_metrics_resources.yaml
gpu_metrics_daemonset.yaml:
apiVersion: nvidia.com/v1
kind: NsightOperatorProfileConfig
metadata:
  name: gpu-metrics-config
  namespace: example-gpu-metrics-ns-with-label
spec:
  defaultProfileRef: "gpu-metrics"
  profiles:
    - name: "gpu-metrics"
      devtoolArgs: "profile --run-agent-in-process --gpu-metrics-devices=all -o /home/auto_gpu_metrics_%{NVDT_POD_FULLNAME}_{NVDT_TIMESTAMP}_{NVDT_UID}.nsys-rep"
      injectionIncludePatterns:
        - ".*sleep infinity.*"
  injectionConfigs:
    - name: "gpu-metrics"
      matchConditions:
        - name: "gpu-metrics"
          expression: |
            object.metadata.ownerReferences.exists(ref, ref.kind == "DaemonSet" &&
            ref.name == "gpu-metrics-collector")
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: gpu-metrics-collector
  namespace: example-gpu-metrics-ns-with-label
  labels:
    app: nvidia-nsight-profile
spec:
  selector:
    matchLabels:
      app: nvidia-nsight-profile
  template:
    metadata:
      labels:
        app: nvidia-nsight-profile
    spec:
      runtimeClassName: nvidia
      containers:
        - name: gpu-metrics-ubuntu-container
          image: nvidia/cuda:12.8.0-base-ubuntu24.04
          command: ["sleep", "infinity"]
          securityContext:
            privileged: true
      tolerations:
        - effect: NoSchedule
          key: nvidia.com/gpu
          operator: Exists
The NsightOperatorProfileConfig customizes profiling parameters (which ensure that GPU metrics are collected) for the DaemonSet. Pods started by this DaemonSet will be controllable by the nsys_k8s.py script.
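Once collection has run, the reports can be fetched from the collector Pods; below is a sketch using the namespace from the sample above and an illustrative destination directory:
./nsys_k8s.py --field-selector metadata.namespace=example-gpu-metrics-ns-with-label download ./gpu-metrics-reports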
Sampling Amazon AWS EFA Network Counters requires additional configuration. The /sys/class/infiniband//ports/*/hw_counters/ directory is not mounted into a container by default, so it should be mounted into the container from the host machine.
Sample custom_values_efa_mount.yaml with the required volumes:
profile:
  # Files inside the /sys/class/infiniband directory contain relative symbolic links to /sys/devices
  volumes:
    [
      {
        "name": "sys-class-infiniband",
        "hostPath": { "path": "/sys/class/infiniband", "type": "Directory" }
      },
      {
        "name": "sys-class-devices",
        "hostPath": { "path": "/sys/devices", "type": "Directory" }
      }
    ]
  volumeMounts:
    [
      {
        "name": "sys-class-infiniband",
        "mountPath": "/mnt/nv/sys/class/infiniband",
        "readOnly": true
      },
      {
        "name": "sys-class-devices",
        "mountPath": "/mnt/nv/sys/devices",
        "readOnly": true
      }
    ]
  # Enable and configure the EFA metrics plugin to collect metrics from a non-default sysfs location.
  devtoolArgs: "profile --enable efa_metrics,-efa-counters-sysfs=\"/mnt/nv/sys\" -o /home/auto_{NVDT_PROCESS_NAME}_%{NVDT_POD_FULLNAME}_%{NVDT_CONTAINER_NAME}_{NVDT_TIMESTAMP}_{NVDT_UID}.nsys-rep"
  # The regex to match applications to profile.
  injectionIncludePatterns: "^/usr/bin/python3 /usr/local/bin/torchrun.*$"
Perform the following steps to uninstall the NVIDIA Nsight Operator:
helm uninstall nsight-operator
This will automatically delete all the resources created by nsight-operator and remove all the nvidia-nsight-profile labels from all the labeled resources.
Additionally, you can delete only the labels from all resources labeled with nvidia-nsight-profile=enabled to exclude those resources from injection:
kubectl get all --all-namespaces -l nvidia-nsight-profile=enabled -o custom-columns=:.metadata.name,NS:.metadata.namespace,KIND:.kind --no-headers | while read name namespace kind; do kubectl label $kind $name -n $namespace nvidia-nsight-profile-; done
Sometimes you may find that a Pod is not injected with the sidecar container as expected. Check the following items:
- The Pod or its namespace is labeled with nvidia-nsight-profile=enabled.
- The NVIDIA Nsight Operator Pod in the nvidia-nsight-profile namespace is in the running state and no error logs have been produced.
- Run ./nsys_k8s.py check [-n namespace] [pod] to verify whether the injection was performed.
GPU metrics may fail to be collected if another process is already sampling them on the same GPU. Check the following items:
- Another injected Pod may already be running with the --gpu-metrics-devices option. In that case, you can use a report from that injection or modify the configurations to ensure only one Pod is running with the GPU metrics option.
- Your cluster may be running the nvidia-dcgm-exporter (documentation) DaemonSet, which collects GPU metrics. If you are not using it, you can temporarily disable it:
kubectl -n gpu-operator-resources patch daemonset nvidia-dcgm-exporter -p '{"spec": {"template": {"spec": {"nodeSelector": {"non-existing": "true"}}}}}'
To enable it again, run the following command:
kubectl -n gpu-operator-resources patch daemonset nvidia-dcgm-exporter --type json -p='[{"op": "remove", "path": "/spec/template/spec/nodeSelector/non-existing"}]'
Initialization of Pods with the profiler injected can be slower on OpenShift clusters during the first-time setup (post-configuration). This is due to the more complex mechanism required for node configuration, specifically the updating of kernel.perf_event_paranoid.