Openstack telemetry: Ceilometer, Gnocchi and Aodh

In Openstack, Ceilometer is the component that gathers data from the cloud and pre-processes it. It distinguishes between samples (e.g. CPU time) and events (e.g. the creation of an instance). Resources, meters and samples are fundamental concepts in Ceilometer.

Samples are retrieved at regular intervals, and if Ceilometer fails to get a sample it can be estimated by interpolation. Events are retrieved as they happen and cannot be estimated.

Ceilometer sends events to a storage service, while samples are sent to a service named Gnocchi, which is optimized to handle large amounts of time-series data.

Aodh gets measures from Gnocchi, checks whether certain conditions are met and triggers actions. This is the foundation for application auto-scaling.

Other uses for Gnocchi data are monitoring the health of the cloud and billing.

Ceilometer has three ways of retrieving samples and events:

  • Services may voluntarily provide them by sending Ceilometer notifications via Openstack’s messaging system. This is the preferred way, since it is based on the internal knowledge a service has about its resources, and it is fast, with little overhead and stress on the systems.
  • Ceilometer actively retrieves data via the services’ APIs, which is a costlier method of obtaining billing and alarming data.
  • Ceilometer can get data by accessing sub-components of services such as the hypervisor that run the instances.

The second and third methods are also referred to as methods where Ceilometer “polls” for the samples.
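The polling methods are driven by Ceilometer's polling configuration file. A minimal sketch of what such a polling.yaml might look like; the source name, interval and meter list below are illustrative, not taken from this document:

```yaml
---
sources:
  - name: cpu_pollster   # illustrative source name
    interval: 300        # poll the listed meters every 300 seconds
    meters:
      - cpu
      - cpu_util
```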

More details on Openstack telemetry can be found on this link:

While Ceilometer has resources, meters and samples, Gnocchi has resources, metrics and measures. A Gnocchi resource corresponds to a Ceilometer resource, and a metric is roughly equivalent to a meter in Ceilometer. Gnocchi does not store every metric value it receives from Ceilometer; rather, it combines values and stores the results at regular intervals according to an archive policy.

Listing gnocchi resources, metrics, measures:
gnocchi resource list
gnocchi resource show UUID
gnocchi metric list*
gnocchi metric show cpu --resource UUID
gnocchi measures show cpu --resource UUID --start YYYY-MM-DDTHH:MM:SS+00:00
* Output will be empty for non-admin users.
Listing resources, metrics, measures with openstack client:
openstack metric resource list
openstack metric resource show UUID
openstack metric metric list*
openstack metric metric show cpu --resource UUID
openstack metric measures show cpu --resource UUID --start YYYY-MM-DDTHH:MM:SS+00:00
* Output will be empty for non-admin users.
Server grouping:
openstack server create --property metering.server_group=Mail*
Metrics aggregation:
gnocchi measures aggregation --query server_group=Mail --resource-type=instance --aggregation mean -m cpu_util
* For gnocchi, all servers created with '--property metering.server_group=Mail' can be considered tagged.
Listing Ceilometer Events:
Event types are defined in a YAML file:
List event types, events and event details:
ceilometer event-type-list
ceilometer event-list
ceilometer event-show EVENT_ID
* There is no option in the horizon GUI to view events or statistics, but gnocchi visualisation can be provided by Grafana.
Example generating CPU and disk load and showing gnocchi measures:
Create two instances:
openstack server create --image cirros-image --flavor 1 --nic net-id=... --user-data cpu-user*
openstack server create --image cirros-image --flavor 1 --nic net-id=... --user-data disk-user*
Let the instances finish creating and leave them running for a while:
openstack server list
Show cpu usage measures from cpu-user server:
gnocchi measures show cpu_util --resource-id SERVER_UUID
Show disk usage measures from disk-user server:
gnocchi measures show disk.read.bytes --resource-id SERVER_UUID
gnocchi measures show disk.write.bytes --resource-id SERVER_UUID
* Files 'cpu-user' and 'disk-user' are your bash scripts that generate CPU load and disk load.
** Don’t forget to stop cpu-server and disk-server after you have finished, since they will continue to generate CPU and disk load.
An alarm has:
Condition: depends on type
Evaluation window
State: OK/Alarm/Insufficient Data
Actions for state transitions
Condition example:
mean cpu_util > 60
all resources tagged server_group=Mail
Single Resource Threshold Alarm:
openstack alarm create --name cpuhigh \
--type gnocchi_resources_threshold \
--aggregation-method mean --metric cpu_util \
--comparison-operator gt --threshold 30 \
--resource-type instance \
--resource-id INSTANCE_UUID \
--granularity 60 --evaluation-periods 2 \
--alarm-action \
Alarm based on resource aggregates:
openstack alarm create --name cpuhigh \
--type gnocchi_aggregation_by_resources_threshold \
--aggregation-method mean --metric cpu_util \
--comparison-operator gt --threshold 30 \
--resource-type instance \
--query '{ "=": { "server_group" : "Mail" }}' \
--granularity 60 --evaluation-periods 2 \
--alarm-action \
Alarm commands:
openstack alarm list
openstack alarm show ALARM_ID
openstack alarm-history show ALARM_ID
openstack alarm state get ALARM_ID
openstack alarm update ALARM_ID ...

Openstack storage cookbook: a list of useful commands with examples

Here is the list of openstack storage commands and examples that I collected and found useful:

Volume operations (openstack client):
openstack volume create --size 1 [NAME]
openstack volume create --size 1 --image [IMAGE] [NAME]
openstack volume create --size 1 --source [VOLUME] [NAME]
openstack volume create --size 1 --snapshot [SNAPSHOT] [NAME]
openstack volume list
openstack volume show [VOLUME]
openstack volume set [VOLUME] --size 2
openstack volume set [VOLUME] --name [NEW_NAME]
openstack volume set [VOLUME] --property [KEY]=[VALUE]
openstack volume delete [VOLUME]*
Volume operations (cinder client):
cinder create 1
cinder create 1 --image [IMAGE]
cinder create 1 --source-volid [VOLUME_ID]
cinder create 1 --snapshot-id [SNAPSHOT_ID]
cinder list
cinder show [VOLUME]
cinder extend [VOLUME] 2
cinder rename [VOLUME] [NEW_NAME]
cinder metadata [VOLUME] set [KEY]=[VALUE]
cinder delete [VOLUME]*
Volumes cannot be smaller than 1GB, that is why '--size 1' is used.
* It is not possible to undo a delete operation.
** Add '--type [VOLUME_TYPE]' to the create options to specify a volume type.
Volume type operations:
openstack volume type create thin --property volume_backend_name=lvm --property lvm:provisioning=thin*
openstack volume type list
openstack volume type show [NAME]
openstack volume create --type thin --size 1 [thinvol]
openstack volume list --long**
* In reality the lvm backend does not have a 'provisioning' parameter (actually it has no parameters at all), but this “fake” property is used to show how types work.
** In order to see the volume type in the listing, use the '--long' option.
Attaching and detaching volumes (openstack and nova clients):
openstack server add volume SERVER_REF VOLUME_REF
openstack server add volume --device /dev/vdc* SERVER_REF VOLUME_REF
openstack server remove volume SERVER_REF VOLUME_REF
nova volume-attach SERVER_REF VOLUME_ID [DEVICE]
nova volume-detach SERVER_REF VOLUME_ID
Boot from a volume CLI commands:
openstack volume create --image IMAGE_REF mybootvol
openstack volume create --image SNAPSHOT_REF mybootvol
openstack server create --volume mybootvol --flavor ... --nic ... myinstance
nova boot myinstance --nic .. --flavor .. --block-device source=volume,id=VOLUME_ID,dest=volume,size=SIZE,bootindex=0
nova boot myinstance --nic .. --flavor .. --block-device source=image,id=IMAGE_ID,dest=volume,size=SIZE,bootindex=0
nova boot myinstance --nic .. --flavor .. --block-device source=snapshot,id=SNAPSHOT_ID,dest=volume,size=SIZE,bootindex=0
When an instance is launched from an image, the instance’s internal disk is created from the image and the instance boots from that disk. Such an internal disk is called ephemeral storage: it disappears when the instance is deleted.
* Libvirt (QEMU and KVM) ignores the '--device' parameter, so you are stuck with whatever device filename Nova assigns.
Snapshot operations (openstack and cinder clients):
Create a snapshot:
openstack snapshot create --name myvol-snap4 myvol*
cinder snapshot-create --name myvol-snap4 myvol*
Create volume from snapshot:
openstack volume create --snapshot myvol-snap4 myvol-lastweek
Create a backup:
openstack volume backup create --name mybck myvol**
openstack volume backup create --name mybck --snapshot SNAP myvol
Restore a backup:
openstack volume backup restore mybck myvol2
* This command will fail on attached volumes, so the '--force' parameter must be specified.
** To back up an attached volume add the '--force' parameter, or '--incremental', which stores the difference between the current volume and the previous backup. The base for an incremental backup is the backup with the most recent timestamp.
*** The cinder client has 'backup-create' and 'backup-restore' commands.
**** Other openstack commands include 'delete/list/show/set' to delete a backup, list backups, show backup details and set backup properties respectively.
Recover deleted files from a snapshot:
Identify volume for backup:
openstack volume list --long
Identify the network where to launch the new server:
openstack network list
Use a clone of the volume to launch an instance:
openstack server create --volume myclonevol --nic net-id=... --flavor 1 --key-name mykey volserver
To connect to this instance, a security group and a floating IP must be assigned:
openstack server add security group volserver ssh
openstack server add floating ip volserver FLOATING_IP
ssh -i mykey.pem username@ip_address
Create some files to be recovered:
cp /etc/passwd /home/username/file1
cp /etc/fstab /home/username/file2
Create a snapshot of a still attached volume:
openstack snapshot create --name myclonesnap myclonevol --force
Remove a file:
rm /home/username/file1
Since a snapshot cannot be attached to an instance, create a volume first:
openstack volume create --snapshot myclonesnap --size 1 tempvol
Attach the new volume:
openstack server add volume volserver tempvol
Log back in to the server:
ssh -i mykey.pem username@ip_address
List block storage devices:
Mount the new attached volume:
mount /dev/vdb1 /mnt/temp
List the same directory on the backup volume; both files should be present:
ls -la /mnt/temp/home/username/
Copy the backup file from temporary mount to previous location:
cp /mnt/temp/home/username/file1 /home/username/
Unmount backup volume:
umount /mnt/temp
Remove and delete the redundant copy of the snapshot data:
openstack server remove volume volserver tempvol
openstack volume delete tempvol
Backing up and restoring volumes:
Identify the server:
openstack server list
Show volume details:
openstack volume show myclonevol
Create backup of this attached volume:
openstack volume backup create --name myclonevol.backup.$(date +%y%m%d) --force myclonevol
Check the backup progress*:
openstack volume backup list
Create another file on the instance:
ssh -i mykey.pem username@ip_address
cp /etc/group /home/username/file3
Create an incremental backup:
openstack volume backup create --name myclonevol.backup.$(date +%y%m%d)-1 --force --incremental myclonevol
Check the incremental backup progress:
openstack volume backup list
Inspect the backups:
openstack volume backup show myclonevol.backup.YYMMDD**
openstack volume backup show myclonevol.backup.YYMMDD-1***
Add another file to the instance:
ssh -i mykey.pem username@ip_address
cp /etc/shadow /home/username/file4
Create another incremental backup:
openstack volume backup create --name myclonevol.backup.$(date +%y%m%d)-2 --force --incremental myclonevol
Check the incremental backup progress:
openstack volume backup list
Inspect the incremental backups:
openstack volume backup show myclonevol.backup.YYMMDD-1****
Simulate failure by removing files from this instance*****:
rm -rf /
Identify backup to restore the instance from:
openstack volume backup list
Restore most recent backup to an empty volume******:
openstack volume create --size 1 myclonevol2
openstack volume backup restore myclonevol.backup.YYMMDD-2 myclonevol2
Launch an instance from the restored volume:
openstack server create --volume myclonevol2 --nic net-id=... --flavor 1 --key-name mykey volserver-restored
Add a security group and a floating IP address and log in:
openstack server add security group volserver-restored ssh
openstack server add floating ip volserver-restored FLOATING_IP
ssh -i mykey.pem username@
Check that all files are restored:
ls -la /
* The backup process takes some time; while it is working the backup is shown as ‘creating’, and when completed it shows ‘available’.
** The initial backup should have ‘has_dependent_backups’ set to ‘True’ and ‘is_incremental’ set to ‘False’.
*** The incremental backup should have ‘is_incremental’ set to ‘True’, and since it has no dependent backups, ‘has_dependent_backups’ is ‘False’.
**** An incremental backup is always created from the backup with the latest timestamp, which in this case is our previous incremental backup, which should now have ‘has_dependent_backups’ set to ‘True’.
***** System is now broken beyond repair. Something that I always wanted to do 🙂
****** To restore to an attached volume the instance must be shut down, that is why we are restoring to an empty volume.
******* To enable the volume backup option in the horizon dashboard, edit ‘/etc/openstack-dashboard/local_settings’ and change ‘OPENSTACK_CINDER_FEATURES = { 'enable_backup': False }’ to ‘True’.
Object storage operations (openstack and swift clients):
Creating containers and objects:
openstack container create myphotos
openstack object create myphotos moon.jpg
swift post myphotos
swift upload myphotos moon.jpg*
Access data via URL:
Object with ‘/’ in the name (the swift client can change the object name):
openstack object create myphotos localdir/moon.jpg**
swift upload myphotos localdir/moon.jpg --object-name=moon.jpg
Show object details:
openstack object show myphotos moon.jpg
swift stat myphotos moon.jpg -v
Deleting an object:
openstack object delete myphotos moon.jpg ***
List containers and objects:
openstack container list
openstack object list myphotos --long
swift list --lh
swift list myphotos --lh
Downloading an object:
openstack object save myphotos sun/2020.jpg****
Downloading objects with wget:
wget --user demo --password ******** $OBJECT_URL
Setting Metadata:
openstack object store account set --property category=astronomy
openstack container set --property type=pictures myphotos
openstack object set --property location=japan myphotos moon.jpg
swift post -m location:japan myphotos moon.jpg
Deleting Metadata*****:
openstack object store account unset --property category=astronomy
openstack container unset --property type=pictures myphotos
openstack object unset --property location=japan myphotos moon.jpg
swift post -H "X-Remove-Object-Meta-Location: x"****** myphotos moon.jpg
* The swift upload command creates the container if it does not exist.
** The name of an object created with the openstack command cannot be changed, while the swift client can change the name.
*** Deleted objects cannot be undeleted.
**** Will create directory ‘sun’ and store ‘2020.jpg’ in it. You could also specify an alternate local filename with the ‘--file sun.jpg’ parameter.
***** Setting an empty metadata item also deletes it, but this is not documented.
****** The string ‘Location’ is actually the attribute you want to remove from the object; ‘x’ is there to satisfy HTTP syntax and is ignored.
Access control lists:
Permissions based on PROJECT:USER
demo:demo *:admin *:*
Permissions based on referrer:
Set ACLs:
swift post -r ACL CONTAINER
swift post -w ACL CONTAINER
Clear ACLs:
swift post -r "" CONTAINER
swift post -w "" CONTAINER
Downloading object with wget using auth-token header:
wget --header "x-auth-token: TOKEN_UUID*" $OBJECT_URL
Allow any referrer to access file:
swift post myvideos --read-acl '.referrer:*'**
* The TOKEN_UUID value is the ‘Auth Token’ value shown with ‘swift stat -v myphotos moon.jpg’.
** In order to allow listings of a container, use ‘.referrer:*,.rlistings’ instead of just ‘.referrer:*’.
Temporary URLs:
Create TempURL key:
openstack object store account set --property temp-url-key=abc123
openstack container set --property temp-url-key=abc123
Generate TempURL:
swift tempurl GET 86400 /v1/AUTH_..../myvideos/vid.mp4 abc123*
Generated URL:
Download TempURL with wget
wget -O my-temp-titan.mp4 "http://CLOUD_IP:8080/v1/AUTH_..../myvideos/vid.mp4?temp_url_sig=fa28...&temp_url_expires=..."
* GET allows read access; 86400 is the validity of the temporary URL in seconds.
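The signature in the generated URL is an HMAC-SHA1 over the method, expiry timestamp and object path, keyed with the temp-url-key. A rough sketch of what `swift tempurl` computes, using `openssl`; the key, timestamp and path below are placeholders:

```shell
key="abc123"                            # the temp-url-key set on the account
expires=1700000000                      # absolute Unix time when the URL expires
path="/v1/AUTH_demo/myvideos/vid.mp4"   # placeholder account/container/object path

# HMAC-SHA1 over "METHOD\nEXPIRES\nPATH", hex-encoded
sig=$(printf 'GET\n%s\n%s' "$expires" "$path" \
      | openssl dgst -sha1 -hmac "$key" | awk '{print $NF}')

echo "temp_url_sig=$sig&temp_url_expires=$expires"
```

The resulting `temp_url_sig` and `temp_url_expires` values are exactly the query parameters seen in the wget example above.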
Large objects:
Upload object and segment it into smaller parts:
swift upload --segment-size=100M mycontainer bigobject*
swift upload --segment-size=1M big-container myvideo.mp4
Show object details:
openstack object list big-container --long
openstack object show big-container myvideo.mp4
openstack object list big-container_segments --long
openstack object list big-container_segments
Delete containers:
openstack container delete big-container**
swift delete big-container***
* Objects of any size can be segmented, but since the limit on object size is 5GB, objects larger than this must be split into segments. Container ‘mycontainer’ in this case is empty, it only holds metadata. The actual data goes into a second container whose name is derived from the original, e.g. ‘mycontainer_segments’, where one object per segment is stored.
** Openstack will not delete container while there are segments in it.
*** The swift client will delete the main container ‘big-container’ and all its segments, but it will not delete the container ‘big-container_segments’.
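The segmentation itself can be illustrated locally with GNU `split`; the file names and sizes here are arbitrary, this only mimics what `--segment-size` does on the server side:

```shell
# Create a 5 MB dummy file, then cut it into 2 MB segments.
dd if=/dev/zero of=bigobject bs=1M count=5 2>/dev/null
split -b 2M -d bigobject bigobject.seg-

# seg-00 and seg-01 hold 2 MB each, seg-02 holds the 1 MB remainder
ls -l bigobject.seg-*
```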

Create swap file on Raspberry Pi Zero W

First check if you have swap configured:

swapon --show

If empty, you can create for example a 256MB swap file with the following commands:

fallocate -l 256M /var/swap
chmod 600 /var/swap
mkswap /var/swap
swapon /var/swap
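Note that `swapon` only enables the file for the current boot. To keep it enabled across reboots, a standard fstab entry can be added (run as root; this assumes the /var/swap file created above):

```shell
# Register the swap file in /etc/fstab so it is activated at boot
echo '/var/swap none swap sw 0 0' >> /etc/fstab
```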

You can check again that swap is properly configured:

# swapon --show
NAME      TYPE SIZE USED PRIO
/var/swap file 256M   0B   -2

# free -m
              total        used        free      shared  buff/cache   available
Mem:            480         109         140           9         229         312
Swap:           255           0         255