Upgrade flaresolverr docker container

Today I noticed that CAPTCHAs were not getting resolved in Jackett, for no obvious reason. I suspected that the FlareSolverr release I was running was outdated, so I decided to update it to the latest one. First I checked what I had running:

# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED        STATUS        PORTS                    NAMES
f144c092302c   a88ebf3195a5   "/usr/bin/dumb-init …"   4 months ago   Up 10 hours   0.0.0.0:8191->8191/tcp   flaresolverr

Pulled the new image:

docker image pull ghcr.io/flaresolverr/flaresolverr:latest

Checked that both images are present:

# docker images
REPOSITORY                          TAG       IMAGE ID       CREATED        SIZE
ghcr.io/flaresolverr/flaresolverr   latest    5d07ec0ae1eb   4 weeks ago    574MB
ghcr.io/flaresolverr/flaresolverr   <none>    a88ebf3195a5   4 months ago   569MB

Stopped and deleted the old container:

docker stop f144c092302c
docker rm f144c092302c

Once the container had been stopped and deleted, I started a new instance:

docker run -d   --name=flaresolverr   -p 8191:8191   -e LOG_LEVEL=info   --restart unless-stopped   ghcr.io/flaresolverr/flaresolverr:latest

Checked that the new container is running:

# docker ps
CONTAINER ID   IMAGE                                      COMMAND                  CREATED         STATUS        PORTS                              NAMES
e8ec7ca07cf3   ghcr.io/flaresolverr/flaresolverr:latest   "/usr/bin/dumb-init …"   3 seconds ago   Up 1 second   0.0.0.0:8191->8191/tcp, 8192/tcp   flaresolverr

Cleaned up the unused image:

# docker images
REPOSITORY                          TAG       IMAGE ID       CREATED        SIZE
ghcr.io/flaresolverr/flaresolverr   latest    5d07ec0ae1eb   4 weeks ago    574MB
ghcr.io/flaresolverr/flaresolverr   <none>    a88ebf3195a5   4 months ago   569MB

# docker image rm a88ebf3195a5

# docker images
REPOSITORY                          TAG       IMAGE ID       CREATED       SIZE
ghcr.io/flaresolverr/flaresolverr   latest    5d07ec0ae1eb   4 weeks ago   574MB
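To confirm the upgraded container actually answers, FlareSolverr can be queried on its listening port. A minimal check, assuming the default port 8191 on localhost (the exact JSON fields may differ between releases):

curl http://localhost:8191/

It should return a short JSON document with a ready message and the FlareSolverr version, which should now match the freshly pulled image, and Jackett should be able to resolve CAPTCHAs again.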

Openstack storage cookbook: list of useful commands with examples

Here is a list of OpenStack storage commands and examples that I have collected and found useful:

Volume operations:
Openstack client commands:
openstack volume create --size 1 [NAME]
openstack volume create --size 1 --image [IMAGE] [NAME]
openstack volume create --size 1 --source [VOLUME] [NAME]
openstack volume create --size 1 --snapshot [SNAPSHOT] [NAME]
openstack volume list
openstack volume show [VOLUME]
openstack volume set [VOLUME] --size 2
openstack volume set [VOLUME] --name [NEW_NAME]
openstack volume set [VOLUME] --property [KEY]=[VALUE]
openstack volume delete [VOLUME]*
Cinder client commands:
cinder create 1
cinder create 1 --image [IMAGE]
cinder create 1 --source-volid [VOLUME_ID]
cinder create 1 --snapshot-id [SNAPSHOT_ID]
cinder list
cinder show [VOLUME]
cinder extend [VOLUME] 2
cinder rename [VOLUME] [NEW_NAME]
cinder metadata [VOLUME] set [KEY]=[VALUE]
cinder delete [VOLUME]*
Volumes cannot be smaller than 1GB, which is why --size 1 is used.
* It is not possible to undo a delete operation.
** A volume type can be specified by adding --type [VOLUME_TYPE] to the create options.
Volume types:
openstack volume type create thin --property volume_backend_name=lvm --property lvm:provisioning=thin*
openstack volume type list
openstack volume type show [NAME]
Volume operations:
openstack volume create --type thin --size 1 thinvol
openstack volume list --long**
* In reality the LVM backend does not have a ‘provisioning’ parameter (it actually has no parameters at all); this “fake” property is only used to show how volume types work.
** In order to see the volume type in the listing, use the ‘--long’ option.
Attaching and detaching volumes:
Openstack client commands:
openstack server add volume SERVER_REF VOLUME_REF
openstack server add volume --device /dev/vdc* SERVER_REF VOLUME_REF
openstack server remove volume SERVER_REF VOLUME_REF
Nova client commands:
nova volume-attach SERVER_REF VOLUME_ID [DEVICE]
nova volume-detach SERVER_REF VOLUME_ID
Boot-from-volume CLI commands (a concrete example follows the notes below):
openstack volume create --image IMAGE_REF mybootvol
openstack volume create --image SNAPSHOT_REF mybootvol
openstack server create --volume mybootvol --flavor ... --nic ... myinstance
nova boot myinstance --nic .. --flavor .. --block-device source=volume,id=VOLUME_ID,dest=volume,size=SIZE,bootindex=0
nova boot myinstance --nic .. --flavor .. --block-device source=image,id=IMAGE_ID,dest=volume,size=SIZE,bootindex=0
nova boot myinstance --nic .. --flavor .. --block-device source=snapshot,id=SNAPSHOT_ID,dest=volume,size=SIZE,bootindex=0
When an instance is launched from an image, the instance's internal disk is created from the image and the instance boots from that disk. Such an internal disk is called ephemeral storage; it disappears when the instance is deleted.
* Libvirt (QEMU and KVM) ignores the ‘--device’ parameter, so you are stuck with whatever device filename Nova assigns.
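As a concrete, purely illustrative sketch of the boot-from-volume flow above (the image ‘cirros’, flavor ‘m1.small’ and network ‘private’ are assumptions, adjust them to your environment):

openstack volume create --size 10 --image cirros mybootvol
openstack server create --volume mybootvol --flavor m1.small --nic net-id=private myinstance

With --volume the root disk lives on a Cinder volume rather than on ephemeral storage, so by default it is not thrown away together with the instance.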
Create snapshot:
openstack snapshot create --name myvol-snap4 myvol*
cinder snapshot-create --name myvol-snap4 myvol*
Create volume from snapshot:
openstack volume create --snapshot myvol-snap4 myvol-lastweek
Backup create***:
openstack volume backup create** --name mybck myvol
openstack volume backup create --name mybck --snapshot SNAP myvol
Backup restore***:
openstack volume backup restore mybck myvol2
* This command will fail on attached volumes, so the ‘--force’ parameter must be specified.
** To back up an attached volume add the ‘--force’ parameter, or ‘--incremental’, which stores only the difference between the current volume and the previous backup. The base for an incremental backup is the backup with the most recent timestamp.
*** Cinder client has ‘backup-create’ and ‘backup-restore’ commands.
**** Other openstack backup subcommands include ‘delete/list/show/set’, to delete a backup, list backups, show backup details and set backup properties respectively.
Recover deleted files from a snapshot:
Identify the volume to back up:
openstack volume list --long
Identify the network where the new server will be launched:
openstack network list
Use a clone of the volume to launch an instance:
openstack server create --volume myclonevol --nic net-id=... --flavor 1 --key-name mykey volserver
To connect to this instance, a security group and a floating IP must be assigned:
openstack server add security group volserver ssh
openstack server add floating ip volserver 172.24.4.230
ssh -i mykey.pem username@ip_address
Create some files to be recovered:
cp /etc/passwd /home/username/file1
cp /etc/fstab /home/username/file2
Create a snapshot of a still attached volume:
openstack snapshot create --name myclonesnap myclonevol --force
Remove a file:
rm /home/username/file1
exit
Since a snapshot cannot be attached to an instance, create a volume from it first:
openstack volume create --snapshot myclonesnap --size 1 tempvol
Attach the new volume:
openstack server add volume volserver tempvol
Log back in to the server:
ssh -i mykey.pem username@ip_address
List block storage devices:
lsblk
Create a mount point (if needed) and mount the newly attached volume:
mkdir -p /mnt/temp
mount /dev/vdb1 /mnt/temp
List the same directory from backup volume, both files should be present:
ls -la /mnt/temp/home/username/
Copy the backup file from temporary mount to previous location:
cp /mnt/temp/home/username/file1 /home/username/
Unmount backup volume:
umount /mnt/temp
Detach and delete the now redundant copy of the snapshot data:
openstack server remove volume volserver tempvol
openstack volume delete tempvol
Backing up and restoring volumes:
Identify the server:
openstack server list
Show volume details:
openstack volume show myclonevol
Create backup of this attached volume:
openstack volume backup create --name myclonevol.backup.$(date +%y%m%d) --force myclonevol
Check the backup progress*:
openstack volume backup list
Create another file on the instance:
ssh -i mykey.pem username@ip_address
cp /etc/group /home/username/file3
Create an incremental backup:
openstack volume backup create --name myclonevol.backup.$(date +%y%m%d)-1 --force --incremental myclonevol
Check the incremental backup progress:
openstack volume backup list
Inspect the backups:
openstack volume backup show myclonevol.backup.YYMMDD**
openstack volume backup show myclonevol.backup.YYMMDD-1***
Add another file to the instance:
ssh -i mykey.pem username@ip_address
cp /etc/shadow /home/username/file4
Create another incremental backup:
openstack volume backup create --name myclonevol.backup.$(date +%y%m%d)-2 --force --incremental myclonevol
Check the incremental backup progress:
openstack volume backup list
Inspect the incremental backups:
openstack volume backup show myclonevol.backup.YYMMDD-1****
Simulate failure by removing files from this instance*****:
rm -rf /
exit
Identify backup to restore the instance from:
openstack volume backup list
Restore most recent backup to an empty volume******:
openstack volume create --size 1 myclonevol2
openstack volume backup restore myclonevol.backup.YYMMDD-2 myclonevol2
Launch an instance from the restored volume:
openstack server create --volume myclonevol2 --nic net-id=... --flavor 1 --key-name mykey volserver-restored
Add a security group and floating IP address and login:
openstack server add security group volserver-restored ssh
openstack server add floating ip volserver-restored 172.24.4.233
ssh -i mykey.pem username@ip_address
Check that all files are restored:
ls -la /
* The backup process takes some time; while it is working the status is shown as ‘creating’, and when completed it shows ‘available’.
** The initial backup should have ‘has_dependent_backups’ set to ‘True’ and ‘is_incremental’ set to ‘False’.
*** The incremental backup should have ‘is_incremental’ set to ‘True’ and, having no dependent backups yet, ‘has_dependent_backups’ set to ‘False’.
**** An incremental backup is always created from the backup with the latest timestamp, which in this case is our previous incremental backup, which should now have ‘has_dependent_backups’ set to ‘True’.
***** System is now broken beyond repair. Something that I always wanted to do 🙂
****** To restore to an attached volume the instance must be shut down; that is why we are restoring to an empty volume.
******* To enable the volume backup option in the Horizon dashboard, edit ‘/etc/openstack-dashboard/local_settings’ and change ‘enable_backup’ in ‘OPENSTACK_CINDER_FEATURES’ from ‘False’ to ‘True’, as in the snippet below.
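A minimal sketch of the relevant setting in ‘/etc/openstack-dashboard/local_settings’ (path and option name taken from the note above, the rest of the file stays unchanged):

OPENSTACK_CINDER_FEATURES = {
    'enable_backup': True,
}

The web server serving Horizon typically needs to be reloaded for the change to take effect.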
Creating containers and objects:
Openstack client commands:
openstack container create myphotos
openstack object create myphotos moon.jpg
Swift client commands:
swift post myphotos
swift upload myphotos moon.jpg*
Access data via URL:
http://CLOUD-ADDRESS:8080/v1/ACCOUNT/myphotos/moon.jpg
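Fetching that URL directly normally requires a valid auth token (unless a referrer ACL or temporary URL is used, as shown further below). A minimal sketch with curl, where CLOUD-ADDRESS, ACCOUNT and TOKEN_UUID are placeholders:

curl -o moon.jpg -H "X-Auth-Token: TOKEN_UUID" http://CLOUD-ADDRESS:8080/v1/ACCOUNT/myphotos/moon.jpg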
Object with ‘/’ in the name:
openstack object create myphotos localdir/moon.jpg**
Swift can set a different object name on upload:
swift upload myphotos localdir/moon.jpg --object-name=moon.jpg
Show object details:
openstack object show myphotos moon.jpg
swift stat -v myphotos moon.jpg
Deleting an object:
openstack object delete myphotos moon.jpg ***
List containers and objects:
openstack container list
openstack object list myphotos --long
swift list --lh
swift list myphotos --lh
Downloading an object:
openstack object save myphotos sun/2020.jpg****
Downloading objects with wget:
wget --user demo --password ******** $OBJECT_URL
Setting Metadata:
openstack object store account set --property category=astronomy
openstack container set --property type=pictures myphotos
openstack object set --property location=japan myphotos moon.jpg
swift post -m location:japan myphotos moon.jpg
Deleting Metadata*****:
openstack object store account unset --property category=astronomy
openstack container unset --property type=pictures myphotos
openstack object unset --property location=japan myphotos moon.jpg
swift post -H "X-Remove-Object-Meta-Location: x"****** myphotos moon.jpg
* The swift upload command creates the container if it does not exist.
** The name of an object created with the openstack command cannot be changed, while the swift client can upload it under a different name.
*** Deleted objects cannot be undeleted.
**** Will create a directory ‘sun’ and store ‘2020.jpg’ in it. You could also specify an alternate local filename with the ‘--file sun.jpg’ parameter.
***** Setting an empty metadata item also deletes it, but this is not documented.
****** The string ‘Location’ is the attribute you want to remove from the object; the ‘x’ is only there to satisfy HTTP header syntax and is ignored.
Access control lists:
Permissions based on PROJECT:USER
demo:demo *:admin *:*
Permissions based on referrer:
.r:* .r:erol.name
Set ACLs:
swift post -r ACL CONTAINER
swift post -w ACL CONTAINER
Clear ACLs:
swift post -r "" CONTAINER
swift post -w "" CONTAINER
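For example, to give every user of the ‘demo’ project read access to the ‘myphotos’ container (names taken from the examples above) and to clear the ACL again afterwards:

swift post -r 'demo:*' myphotos
swift post -r "" myphotos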
Downloading object with wget using auth-token header:
wget --header "x-auth-token: TOKEN_UUID*" $OBJECT_URL
Allow any referrer to access file:
swift post myvideos --read-acl '.referrer:*'**
* The TOKEN_UUID value is the ‘Auth Token’ value shown by ‘swift stat -v myphotos moon.jpg’.
** In order to allow listings of a container, add the parameter ‘.referrer:*,.rlistings’ instead of just ‘.referrer:*’.
Temporary URLs:
Create TempURL key:
openstack object store account set --property temp-url-key=abc123
openstack container set --property temp-url-key=abc123
Generate TempURL:
swift tempurl GET 86400 /v1/AUTH_..../myvideos/vid.mp4 abc123*
Generated URL:
/v1/AUTH_..../myvideos/vid.mp4?temp_url_sig=fa28...&temp_url_expires=...
Download TempURL with wget:
wget -O my-temp-titan.mp4 "http://CLOUD_IP:8080/v1/AUTH_..../myvideos/vid.mp4?temp_url_sig=fa28...&temp_url_expires=..."
* GET is used to allow read access; 86400 is the validity of the temporary URL in seconds.
Large objects:
Upload object and segment it into smaller parts:
swift upload --segment-size=100M mycontainer bigobject*
Example:
swift upload --segment-size=1M big-container myvideo.mp4
Show object details:
openstack object list big-container --long
openstack object show big-container myvideo.mp4
openstack object list big-container_segments --long
openstack object list big-container_segments
Delete containers:
openstack container delete big-container**
swift delete big-container***
* Objects of any size can be segmented, but since the limit on a single object is 5GB, objects larger than this must be split into segments. In this case the container ‘mycontainer’ is empty and holds only metadata; the actual data goes into a second container whose name is derived from the original, e.g. ‘mycontainer_segments’, where one object per segment is stored.
** The openstack client will not delete a container that still contains objects.
*** The swift client will delete the main container ‘big-container’ and all of its segments, but it will not delete the ‘big-container_segments’ container itself.
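If the leftover segments container should be removed as well, it can be deleted separately; a minimal sketch, assuming the default ‘_segments’ naming used above:

swift delete big-container_segments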