
October 20 2019

23:30

User permissions checking

Background

I’ve recently been working on restricting user access to executables on a Linux box. I removed all execute permissions for others and added them back via access control lists (ACLs) for certain groups. So for example, for cat it looked like this:
# getfacl /usr/bin/cat
getfacl: Removing leading '/' from absolute path names
# file: usr/bin/cat
# owner: root
# group: root
user::rwx
group::r-x
other::r--

For an executable I want the user to have access to, the permissions looked like this:


# getfacl /usr/bin/ping
getfacl: Removing leading '/' from absolute path names
# file: usr/bin/ping
# owner: root
# group: root
user::rwx
group::r-x
group:staff:--x
mask::r-x
other::r--

The user is in the staff group and can execute ping, while everyone else gets a permission denied.

The Test

As an automated test, I thought I’d go over all commands and produce a whitelist of executables a given user has access to.

The script looks a bit like this for a single executable:

# cat /tmp/foo.py
import os

# the user's group ID
os.setgid(2000)
# the user's ID
os.setuid(2003)

print(os.access('/usr/bin/cat', os.X_OK))
print(os.access('/usr/bin/ping', os.X_OK))

When I ran it, I expected the test for the first executable to be False and the second to be True:


# python /tmp/foo.py
True
True

Aeh, what? Doesn’t the script run as my user, who’s in the staff group?

Turns out there is more to a process than just the group and user ID. There are also supplementary groups and capabilities. When I changed the script to call print(os.getgroups()), it printed the supplementary groups of the user I was running the script as, which was root in this case. So I changed the script to also set the supplementary groups to those of the user:


import os

# the user's group ID
os.setgid(2000)
# the user's supplementary groups
os.setgroups([2000, 2003])
# the user's ID
os.setuid(2003)

print(os.access('/usr/bin/cat', os.X_OK))
print(os.access('/usr/bin/ping', os.X_OK))

and running it returns the right results:

# python /tmp/foo.py
False
True
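
With the supplementary groups in place, the whitelist I was after is a small extension of the same script. A sketch (the IDs and the single scanned directory are specific to my setup):

import os

# drop privileges to the user under test;
# order matters: set the groups before giving up root via setuid
os.setgid(2000)
os.setgroups([2000, 2003])
os.setuid(2003)

# collect every executable under /usr/bin the user may run
whitelist = []
for name in sorted(os.listdir('/usr/bin')):
    path = os.path.join('/usr/bin', name)
    if os.access(path, os.X_OK):
        whitelist.append(path)

print('\n'.join(whitelist))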

Caveats

Restricting permissions with ACLs and testing the way I demonstrated above can lead to false positives for scripts. You cannot remove execute permission from the script interpreter (e.g. /usr/bin/python) while keeping it, via an ACL, on the actual script. The test above will tell you it’s all fine and dandy, while in reality the user will run into a permission denied.
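
If scripts matter to you, a defensive check has to test the interpreter from the shebang line as well. A minimal sketch (the helper and its name are my own, not part of the original test):

import os

def can_execute(path):
    # the file itself must be executable for the current user ...
    if not os.access(path, os.X_OK):
        return False
    # ... and for interpreted scripts the interpreter named on the
    # shebang line must be executable too (reading the file
    # requires read access, of course)
    with open(path, 'rb') as f:
        first = f.readline()
    if first.startswith(b'#!'):
        parts = first[2:].split()
        if parts:
            return os.access(parts[0].decode(), os.X_OK)
    return True

print(can_execute('/usr/bin/ping'))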

February 07 2019

04:32

GHC: can’t find a package database

In case you’re using the nix package manager and your nix build fails with:


these derivations will be built:
/nix/store/7xk0m6r07x85rwlh01b3wvq8bbzwbw1n-purebred-0.1.0.0.drv
/nix/store/dmj2ax3qsa55jjl6by9fb9sk929k98nl-ghc-8.6.3-with-packages.drv
/nix/store/j9fl8cmq9c6kjnz9dj79rmbs1kzafyys-purebred-with-packages-8.6.3.drv
building '/nix/store/7xk0m6r07x85rwlh01b3wvq8bbzwbw1n-purebred-0.1.0.0.drv'...
setupCompilerEnvironmentPhase
Build with /nix/store/cclv7n6jr311i5ywwkms1m3iz4lsg37j-ghc-8.6.3.
unpacking sources
unpacking source archive /nix/store/j23vlzlg2rmqy0a706h235j4v9zh4m9s-purebred
source root is purebred
patching sources
compileBuildDriverPhase
setupCompileFlags: -package-db=/build/setup-package.conf.d -j4 -threaded
Loaded package environment from /build/purebred/.ghc.environment.x86_64-linux-8.6.3
ghc: can't find a package database at /home/rjoost/.cabal/store/ghc-8.6.3/package.db
builder for '/nix/store/7xk0m6r07x85rwlh01b3wvq8bbzwbw1n-purebred-0.1.0.0.drv' failed with exit code 1
cannot build derivation '/nix/store/dmj2ax3qsa55jjl6by9fb9sk929k98nl-ghc-8.6.3-with-packages.drv': 1 dependencies couldn't be built
cannot build derivation '/nix/store/j9fl8cmq9c6kjnz9dj79rmbs1kzafyys-purebred-with-packages-8.6.3.drv': 1 dependencies couldn't be built
error: build of '/nix/store/j9fl8cmq9c6kjnz9dj79rmbs1kzafyys-purebred-with-packages-8.6.3.drv' failed

then the solution is actually easier than you think. It happens when you run


cabal new-repl

inside a nix shell, because cabal creates a hidden package environment file. So look for a


.ghc.environment.&lt;arch&gt;-&lt;os&gt;-&lt;ghc-version&gt;
# for example on Linux with GHC 8.6.3
.ghc.environment.x86_64-linux-8.6.3

Delete it and you should be good to go.
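
For example, from the project root (the file name follows the pattern above; adjust it to your platform and GHC version):

rm .ghc.environment.x86_64-linux-8.6.3
# or, to catch whatever variant is lying around:
find . -maxdepth 1 -name '.ghc.environment.*' -delete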

December 14 2018

02:04

Docker volume mount fails

I recently stumbled over this odd error message in one of our GitLab runners:

ERROR: for nginx_proxy Cannot start service load_balancer: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"rootfs_linux.go:58: mounting \\\"/builds/group/project/nginx/nginx.conf\\\" to rootfs \\\"/data/docker/overlay2/88fb8a0ee201dd14cfc9aa9befe4d7a5eb28e5ec816a2d76726040316853ed11/merged\\\" at \\\"/data/docker/overlay2/88fb8a0ee201dd14cfc9aa9befe4d7a5eb28e5ec816a2d76726040316853ed11/merged/etc/nginx/nginx.conf\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type

It is the result of running docker-compose up myservice for a service which is defined to use just an image and mounts files like so:

- ./nginx/nginx.conf:/etc/nginx/nginx.conf

I’ve spent a bit of time figuring out what the underlying problem is. In hindsight, the error message already gives it away, but I was unable to reproduce the issue on my host machine. That is because the problem is actually more related to Docker than to your host.

When I found out that the GitLab runner is itself a Docker container, it dawned on me that what we are doing here is a container-in-a-container operation. Such a container typically shares the Docker instance with the host system, so the bind mount actually happens on the host machine, and Docker tries to mount the path from a directory/file which doesn’t exist on the host.

To verify if we can reproduce the same error on the host, I tried to bind mount a volume with a path which doesn’t exist and voila:

$ sudo docker run --rm -it --volume /build/nginx/nginx.conf:/etc/nginx/nginx.conf nginx --help
docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"rootfs_linux.go:58: mounting \\\"/build/nginx/nginx.conf\\\" to rootfs \\\"/var/lib/docker/devicemapper/mnt/2fab14f3dc592d19b1408618a5ba26e88e334d88fe6b7524dc6c30bb0d26bbfc/rootfs\\\" at \\\"/var/lib/docker/devicemapper/mnt/2fab14f3dc592d19b1408618a5ba26e88e334d88fe6b7524dc6c30bb0d26bbfc/rootfs/etc/nginx/nginx.conf\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.
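
The corollary: a bind-mounted host path has to exist on the Docker host, not just inside the runner container. A quick sanity check, run on the host (the path is the one from the error above):

test -e /builds/group/project/nginx/nginx.conf && echo exists || echo missing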

Be wary of running Docker in Docker when you need to bind mount volumes. Prefer a bare-metal machine or a VM as a runner.

December 03 2018

09:30

“Start request repeated too quickly”

If one of your units is not running anymore and you find this in your journal:


● getmail.service - getmail
Loaded: loaded (/home/rjoost/.config/systemd/user/getmail.service; enabled; vendor preset: enabled)
Active: failed (Result: start-limit-hit) since Thu 2018-11-29 18:42:17 AEST; 3s ago
Process: 20142 ExecStart=/usr/bin/getmail --idle=INBOX (code=exited, status=0/SUCCESS)
Main PID: 20142 (code=exited, status=0/SUCCESS)

Nov 29 18:42:17 bali systemd[3109]: getmail.service: Service hold-off time over, scheduling restart.
Nov 29 18:42:17 bali systemd[3109]: getmail.service: Scheduled restart job, restart counter is at 5.
Nov 29 18:42:17 bali systemd[3109]: Stopped getmail.
Nov 29 18:42:17 bali systemd[3109]: getmail.service: Start request repeated too quickly.
Nov 29 18:42:17 bali systemd[3109]: getmail.service: Failed with result 'start-limit-hit'.
Nov 29 18:42:17 bali systemd[3109]: Failed to start getmail.

it might be because your command really does exit immediately, and you may want to run it manually to verify that. Also check that you indeed have the unit configured with:

Restart=always
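
For reference, a minimal sketch of such a unit. The ExecStart line is the one from the status output above; RestartSec and the StartLimit* settings are my additions, to space out restarts so the start limit isn’t hit as easily:

[Unit]
Description=getmail
StartLimitIntervalSec=60
StartLimitBurst=5

[Service]
ExecStart=/usr/bin/getmail --idle=INBOX
Restart=always
RestartSec=30

[Install]
WantedBy=default.target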

If you’re sure it really is not restarting too quickly, you can reset the counter with:

$ systemctl reset-failed unit
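
Since getmail here is a user unit (note the unit path under ~/.config/systemd/user), you would address it via the user manager, e.g.:

$ systemctl --user reset-failed getmail.service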

Further information can be found in the man pages systemd.unit(5) and systemd.service(5).

July 23 2018

04:11

Best practices for diffing two online MySQL databases

We’ve had to move our internal Red Hat Beaker instance to a new MySQL database version. We made the jump with a five-minute downtime of Beaker. One of the things we wanted to make sure of was not to lose any data.

Setup and Motivation

A database dump is about 135 GB compressed with gzip. The main database was being served by a MySQL 5.1 master/slave setup.

We discussed two possible strategies for switching to MariaDB: either a dump and load, which meant a downtime of 16 hours, or the use of an additional MariaDB slave which would be promoted to the new master. We chose the latter: a new MariaDB 10.2 slave promoted to be the new master.

We wanted to make sure that both slaves, the MySQL 5.1 and the new MariaDB 10.2, were in sync, and that by promoting the MariaDB 10.2 slave to master we would not lose any data. To verify data consistency across the slaves, we diffed both databases.

Diffing

I went through a few iterations of dumping and diffing. Here are the items which worked best.

Ignore mysql-utils if you only have read access

MySQL comes with a bunch of utilities, among them two tools to compare databases: mysqldbcompare and mysqldiff. I tried mysqldiff first but, after studying the source code, decided against using it. The reason is that you have to grant it additional write privileges on the databases, which are arguably small, but still more than I was comfortable with.

Use the “at” utility to schedule mysqldump

The best way I found to kick off the database dumps at the same time is to use at. Scheduling the mysqldump runs manually on the two databases introduces way too many noisy differences. It goes without saying that the database hosts’ clocks need to be synchronized (e.g. by use of chronyd).
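
For example (a sketch; the 02:00 start time and the output path are made up):

# on the MySQL host; run the matching mysqldump on the MariaDB host
echo "mysqldump --single-transaction --order-by-primary --skip-extended-insert beaker | gzip > /tmp/mysql.sql.gz" | at 02:00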

Dump the entire database at once

The mysqldump tool can dump each table separately, but that is not what you want. The default options, which are geared towards a dump and load, are also not what you want.

Instead I dumped MySQL with:

mysqldump --single-transaction --order-by-primary --skip-extended-insert beaker | gzip > mysql.sql.gz;

while for MariaDB I used:

mysqldump --order-by-primary --skip-extended-insert beaker | gzip > mariadb.sql.gz;

The options used aid the later diff:

  • --order-by-primary orders every dumped table row consistently by its primary key
  • --single-transaction keeps a transaction open until the dump has finished, so you get a comparable database snapshot across the two databases for the same starting point
  • --skip-extended-insert produces one INSERT statement per row; otherwise rows are collapsed into multi-row insert statements which are harder to compare

Compression (GZip) and shell pipes are your friend

With big databases, like the Beaker production database, you want to avoid writing anything uncompressed. Linux distributions ship gzip wrappers for cat (zcat), less (zless) and so on, which help with building shell pipes to process the data.

Cut up the dump

Once you have both database dumps, cut them up into their separate tables. The purpose of this is not to sift through the dumps by eye, but rather to cater to diff: the diff tool loads the entire file into memory and, with large database dumps, will quickly run out of it:

diff mysql-beaker.sql.gz mariadb-replica-beaker.sql.gz
diff: memory exhausted

While I did find a tool which can diff the two large files, a unified diff output makes the data easier to compare.
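
One way to do the cutting, a rough sketch which assumes the dump contains mysqldump's usual "-- Table structure for table" banner before each table:

mkdir -p mysql
zcat mysql.sql.gz | awk '
    # start a new output file at each table banner
    /^-- Table structure for table/ {
        name = $NF; gsub(/`/, "", name)
        out = "mysql/" name ".sql"
    }
    # everything before the first banner (the dump header) is skipped
    out { print > out }'
gzip mysql/*.sql
# repeat with mariadb.sql.gz into mariadb/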

Example, using gzip and a pipe as mentioned above:

diff -u <(zcat mysql/table1.sql.gz) <(zcat mariadb/table1.sql.gz) > diffed/table1.diff

Now you can use your shell foo to loop over all the cut-up tables and write the diffs into a separate folder, which then lets you compare them easily.
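
For example, assuming the per-table dumps from the cutting step live in mysql/ and mariadb/:

mkdir -p diffed
for t in mysql/*.sql.gz; do
    name=$(basename "$t" .sql.gz)
    diff -u <(zcat "mysql/$name.sql.gz") <(zcat "mariadb/$name.sql.gz") > "diffed/$name.diff"
done

Empty files in diffed/ are tables which are in sync; anything non-empty deserves a closer look.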

June 30 2017

01:17
Debugging with RPM packages

April 14 2017

01:00
Profiling Haskell: Don’t chase the red herring

January 24 2017

23:40
Changing a website using the developer console


August 07 2015

02:12
PyCon Australia 2015


May 12 2015

00:31
git: Moving partial changes between commits

April 22 2015

23:51
Haskell: From N00b to Beginner

April 11 2015

12:17

Lorraine Loots’ Microscopic Watercolor Paintings Of The Cosmic Universe The Size Of Your Thumbnail


Can you imagine trying to fit images of the cosmic universe into a circle only an inch to an inch and a half wide? Artist Lorraine Loots accomplishes this with nothing more than watercolors and an incredible eye for detail. Watercolor is known for its unpredictable nature and organic qualities; being able to control this medium in a realistic manner in such a small space speaks volumes about Loots’ artistic skill. She renders her miniature paintings on themed days throughout the year, completion date included.

In the series titled Microcosm Mondays, extremely tiny watercolor paintings depicting celestial images of outer space are created, one of which references a real photograph taken by the Hubble Space Telescope. The project gives us other equally clever names, each with its own mini-series: Tiny Tuesdays, Free Fridays, and, with a play on words, Fursdays. Each series has a different theme; guess what this artist draws on Fursdays… cute little furry animals! All are incredibly detailed, down to the last hair and whisker. Each series is drawn on a different day of the week, and at the end of the year a total of 100 microscopic paintings will be completed. What makes Loots’ small masterpieces even more fun is that once one is completed, it is auctioned off on Instagram! So there is not only an element of surprise as to what day she will post her delicate piece, but also a factor of chance as you bid to have one for yourself. Don’t miss the action and check out Loots’ Instagram here. (via MyModernMet)


The post Lorraine Loots’ Microscopic Watercolor Paintings Of The Cosmic Universe The Size Of Your Thumbnail appeared first on Beautiful/Decay Artist & Design.

Reposted from cuty via getstoned
