Hi! I’m writing this article as a mini-HOWTO on how to set up a btrfs RAID-1 volume on LUKS-encrypted disks. This page serves as my personal guide/documentation, although you can use it with little intervention.
Disclaimer: Be very careful! This is a mini-HOWTO article, do not copy/paste commands. Modify them to fit your environment.
Prologue
I had to replace my existing data/media setup (btrfs-raid0) due to random hardware errors on one of the disks. The existing disks are 7.1-year-old WD 1TB drives and the new disks are WD Purple 4TB.
This will give me about 3.64T (from 1.86T). I had concerns about the slower RPM, but at the end of this article you will see some related stats. My primary daily use is streaming media (video/audio/images) via minidlna instead of cifs/nfs (samba), although that service is still up & running.

Disks
It is important to use disks with the exact same size and speed. For Raid 1 purposes, I usually prefer using the same model. One can argue that diversity of models and manufacturers, to reduce the chance of a firmware issue affecting a specific series, should be preferable. When working with Raid 1, the most important things to consider are:
and all the disks should have the same specs, otherwise size and speed will be downgraded to those of the smallest and slowest disk.

Identify Disks
The two (2) Western Digital Purple 4TB disks are manufacturer model: WDC WD40PURZ. The system sees them as:
try to identify them from the kernel with list block devices:
verify it with hwinfo
with list hardware:
Luks
Create Random Encrypted Keys
I prefer to use randomly generated keys for the disk encryption. This is also useful for automated scripts (encrypt/decrypt disks) instead of typing a passphrase. Create a folder to save the encrypted keys:
create keys with dd against urandom: WD40PURZ-85A
WD40PURZ-85T
verify that two (2) 4K-sized random keys exist in the above directory by listing files:
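As an illustration of the steps above (the key directory and the disk-serial file names are assumptions; adjust them to your environment), key generation with dd against /dev/urandom could look like this:

```shell
# directory to hold the keys (path is an example; adjust to taste)
KEYDIR="${KEYDIR:-$HOME/.crypttab.keys}"
mkdir -p "$KEYDIR"

# one 4K random key per disk (disk serial suffixes are hypothetical)
dd if=/dev/urandom of="$KEYDIR/WD40PURZ-85A" bs=4k count=1
dd if=/dev/urandom of="$KEYDIR/WD40PURZ-85T" bs=4k count=1

# verify: both keys should be exactly 4096 bytes
stat -c '%n %s' "$KEYDIR"/WD40PURZ-*
```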
Format & Encrypt Hard Disks
It is time to format and encrypt the hard disks with LUKS. Be very careful: choose the correct disk and type uppercase YES to confirm.
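A hedged sketch only — the device name and key paths are assumptions; double-check the device before running anything destructive:

```shell
# DESTRUCTIVE: formats the disk with LUKS, using the random key as the passphrase
cryptsetup luksFormat /dev/sda /etc/crypttab.keys/WD40PURZ-85A

# open (decrypt) it, creating /dev/mapper/WD40PURZ-85A
cryptsetup luksOpen --key-file /etc/crypttab.keys/WD40PURZ-85A /dev/sda WD40PURZ-85A
```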
Verify Encrypted Disks
print block device attributes:

Open and Decrypt
opening the encrypted disks with luks
Verify Status

BTRFS
Current disks

There are a lot of write/read errors :(

btrfs version
Create BTRFS Raid 1 Filesystem
Using mkfs, selecting a disk label, and choosing raid1 so that metadata and data are on both disks (mirror):
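As a hedged sketch (the mapper device names are assumptions carried over from the LUKS-open step; the label is an example):

```shell
mkfs.btrfs -L data \
  -m raid1 -d raid1 \
  /dev/mapper/WD40PURZ-85A /dev/mapper/WD40PURZ-85T
```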
or as a one-liner (as root):

format output

Notice that both disks have the same UUID (Universal Unique IDentifier) number:

Verify block device
once more, be aware of the same UUID on both disks!

Mount new block disk
create a new mount point
append the below entry to /etc/fstab (as root):
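An example of what such an fstab entry could look like (the UUID, mount point and options are placeholders/assumptions; use the UUID from the previous step):

```
# /etc/fstab
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/data  btrfs  defaults,noatime  0 0
```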
and finally, mount it!
Disk Usage
check disk usage and free space for the new encrypted mount point
btrfs filesystem disk usage

btrfs filesystem show

stats

btrfs fi disk usage
Speed
Using hdparm to test/get some speed stats:
These are the new disks with 5400 rpm; let’s see what the old 7200 rpm disk shows here:
So even though these new disks are 5400 rpm, they seem to be faster than the old ones!! Also, I have mounted the problematic Raid-0 setup as read-only.

Rsync
I am now moving some data to measure the time:
Control and Monitor Utility for SMART Disks
Last but not least, some SMART info with smartmontools:

result:

details

Second disk

selftest results

details
that’s it! -ebal

Tag: btrfs, raid, raid1, luks

Network Booting into Graphical Linux, before it was cool!
Posted by ebal at 15:50:56 in blog

Back in ~2001 I was working part time in my uni lab for some extra cash and a chance to gain some additional knowledge on hardware & linux. I feel that I need to make a disclaimer here and share that prior to Christmas of ‘99, I did not own a personal computer, or a PC as it is better known.

Our tech lab had to format and repair/clone/restore hard disks on a daily basis, as the majority of PCs were failing at regular intervals. That was the result of having 80 to 160 students on 10/15 PCs per lab, running 10/12 hours a day. No one had a dedicated PC/seat. Hard disks were failing left and right. The tech lab had to format/restore them or, in case of total failure, order a replacement disk. We had to make these orders in bulk, so we had to investigate this issue and report back with a solution, as the backlog and cost were notable for our uni.

From what we noticed, over 50% of students did not want to wait to log out and safely shut down their machines. After saving homework on their floppy disks, most of them just turned off the power button on the back of the tower power supply. Imagine doing that to a hard disk 20 years ago, about 10 times a day, every day, in a common lab.

After brainstorming about possible solutions and workarounds, finding ways to restore these hard disks faster, and making posters with “Please safely shutdown this Computer”, an idea was introduced as an experiment: how about completely removing the hard disks from the PCs? We knew it was possible to boot linux from the local network, but could we boot into a graphical environment? We also had to convince lab professors that some of the lab courses could run under linux, like Pascal/C/Fortran/Lisp etc. In the end we had to have a dedicated lab for those diskless PCs. The challenge was indeed accepted, and we had to provide and present a PoC (proof of concept).
The project was well known as LTSP aka the Linux Terminal Server Project. Half of the team was looking into the server part; the other half was working on the client part. Let us recap for a moment: a PC without a hard disk/operating system had to boot from the network and start into a graphical environment.

To boot from the network, your BIOS should have this option. Not an option back then. Then your network card should get some information from DHCP about the tftpd server. Not an option back then. And finally your PC should have a proper graphics card that could indeed run X11 (not Xorg, as Xorg was not yet released back then) and remote-login to an X server! On top of that, CPU & memory were not so “powerful” back then.

I will not get into tech details, this is not that kind of post, but I will share a few. First, we had to tell the BIOS to boot from the network card, and then we had to program the ROM of the network card. That was not so easy back then, mostly because not all network cards had a programmable ROM; those were the expensive ones. After that we had to get the boot image from the server, boot into RAM and load the boot menu. Then select the proper OS, get the kernel and initramfs (if needed for extra drivers) and (pivot) boot into a linux operating system. Finally, we had to auto-start an X11 remote login client that was configured to connect to the X11 server.

Diskless boot, etherboot and EPROM configuration was the game, and floppies were the solution! If I am not mistaken (I may be, I don’t remember everything), I think we had to tell the BIOS to boot from floppy, set the EPROM configuration of the network card, and then boot from the network to get the initial boot menu. Get the kernel, run X11, log in to the X server and run the lab courses to prove that this was working. And that is what we indeed did! For another year and a half, my daily computer at the uni was a diskless PII that was booting from the network via a floppy disk!
Nowadays you can remote install/format/repair your laptop through the internet, effortlessly. And my PC also had a Turbo button!!

Tag: ltsp

VMs on KVM with Terraform
Many thanks to erethon for his help & support on this article.

Working on your home lab, it is quite often that you need to spawn containers or virtual machines to test or develop something. I was doing this kind of testing with public cloud providers, with minimal VMs and for short periods of time, to reduce any costs. In this article I will try to explain how to use libvirt - that means kvm - with terraform, and provide a simple way to run this on your linux machine. Be aware this will be a (long) technical article and some experience is needed with kvm/libvirt & terraform, but I will try to keep it simple so you can follow the instructions.

Terraform
Install Terraform v0.13 either from your distro or directly from hashicorp’s site.
Libvirt
same thing for libvirt

verify that you have access to libvirt
Terraform Libvirt Provider
To access the libvirt daemon via terraform, we need the terraform-libvirt provider: a Terraform provider to provision infrastructure with Linux’s KVM using libvirt. The official repo is on GitHub - dmacvicar/terraform-provider-libvirt and you can download a precompiled version for your distro from the repo, or you can download a generic version from my gitlab repo ebal / terraform-provider-libvirt · GitLab. These are my instructions:
Terraform Init
Let’s create a new directory and test that everything is fine.

everything seems okay! We can verify with tree or find

Provider
But did we actually connect to libvirtd via terraform? Short answer: No. We just told terraform to use this specific provider. How to connect? We need to add the libvirt connection URI to the provider section:
so our Provider.tf looks like this
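A minimal sketch of what such a Provider.tf could look like (the provider source address and the URI value are assumptions; adjust to your setup):

```hcl
terraform {
  required_version = ">= 0.13"
  required_providers {
    libvirt = {
      source = "dmacvicar/libvirt"
    }
  }
}

provider "libvirt" {
  # local libvirt daemon; a remote one would use e.g. qemu+ssh://user@host/system
  uri = "qemu:///system"
}
```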
Libvirt URI
libvirt is a virtualization API/toolkit that supports multiple drivers, and thus you can use libvirt against the below virtualization platforms:
Libvirt also supports multiple authentication mechanisms like ssh
so it is really important to properly define the libvirt URI in terraform! In this article, I will limit it to a local libvirt daemon, but keep in mind you can use a remote libvirt daemon as well.

Volume
Next thing, we need a disk volume! Volume.tf
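A hedged sketch of such a Volume.tf (the resource name, image file name and pool are assumptions):

```hcl
resource "libvirt_volume" "ubuntu-2004-volume" {
  name   = "ubuntu-2004-volume"
  pool   = "default"
  format = "qcow2"
  # a locally downloaded cloud image, used as the source of the new volume
  source = "ubuntu-20.04.img"
}
```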
I have already downloaded this image and verified its checksum; I will use this local image as the base image for my VM’s volume. By running

we will see this output:

What we expect is to use this source image, create a new disk volume (a copy) and put it into the default disk storage libvirt pool. Let’s try to figure out what is happening here:

and

Volume Size
Be aware: with this declaration, the produced disk volume image will have the same size as the original source, in this case ~2G of disk. We will show later in this article how to expand it to something larger.

destroy volume
verify

reminder: always destroy!

Domain
Believe it or not, we are halfway to our first VM! We need to create a libvirt domain resource. Domain.tf
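A minimal sketch of such a Domain.tf (resource names and sizing are assumptions; it references the volume declared earlier):

```hcl
resource "libvirt_domain" "ubuntu-2004-vm" {
  name   = "ubuntu-2004-vm"
  memory = 1024
  vcpu   = 1

  disk {
    volume_id = libvirt_volume.ubuntu-2004-volume.id
  }
}
```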
Apply the new tf plan

Verify via virsh:

Destroy them!

That’s it! We have successfully created a new VM from a source image that runs on our libvirt environment. But we can not connect/use or do anything with this instance yet; we need to add a few more things, like a network interface, a console output and a default cloud-init file to auto-configure the virtual machine.

Variables
Before continuing with the user-data (cloud-init), it is a good time to set up some terraform variables.
We are going to use this variable within the user-data yaml file.

Cloud-init
The best way to configure a newly created virtual machine is via cloud-init and the ability to inject a user-data.yml file into the virtual machine via terraform-libvirt.

user-data
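A minimal sketch of what the user-data template could contain (the hostname placeholder is an assumption, filled in later by terraform):

```yaml
#cloud-config
# ${hostname} is substituted by terraform when rendering the template
hostname: ${hostname}
```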
cloud init disk
Terraform will create a new iso by reading the above template file and generating a proper user-data.yaml file. To use this cloud-init iso, we need to configure it as a libvirt cloud-init resource. Cloudinit.tf
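A hedged sketch of such a Cloudinit.tf (the resource name, template file name and the hostname variable are assumptions):

```hcl
resource "libvirt_cloudinit_disk" "cloud-init" {
  name      = "cloud-init.iso"
  pool      = "default"
  user_data = templatefile("${path.module}/user-data.yml", { hostname = var.hostname })
}
```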
and we need to modify our Domain.tf accordingly

Terraform will create and upload this iso disk image into the default libvirt storage pool and attach it to the virtual machine in the boot process. At this moment the tf_libvirt directory should look like this:

To give you an idea, the abstract design is this:

apply

Lots of output, but let me explain it really quickly: generate a user-data file from the template, populate it with variables, create a cloud-init iso, create a volume disk from the source, create a virtual machine with this new volume disk and boot it with this cloud-init iso. Pretty much, that’s it!!!

destroy

The most important detail is:
Console
But there are a few things still missing. A console, for starters, so we can connect to this virtual machine! To do that, we need to re-edit Domain.tf and add a console output:
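A sketch of the console block (values are the usual serial-pty defaults and are assumptions here):

```hcl
# added inside the libvirt_domain resource
console {
  type        = "pty"
  target_type = "serial"
  target_port = "0"
}
```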
the full file should look like:

Create the VM again with

And test the console:

wow! We have actually logged in to this VM using the libvirt console!

Virtual Machine
some interesting details

Do not forget to destroy

extend the volume disk
As mentioned above, the volume’s disk size is exactly that of the origin source image, in our case 2G. What we need to do is use the source image as a base for a new volume disk. In our new volume disk we can declare the size we need. I would like to make this a user variable: Variables.tf
Arithmetic in terraform!! So the Volume.tf should be:
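A sketch, assuming a vm_volume_size variable holding the size in gigabytes (resource names are assumptions):

```hcl
resource "libvirt_volume" "ubuntu-2004-disk" {
  name           = "ubuntu-2004-disk"
  pool           = "default"
  format         = "qcow2"
  # use the source image as base, and grow the new volume to the requested size
  base_volume_id = libvirt_volume.ubuntu-2004-volume.id
  size           = 1024 * 1024 * 1024 * var.vm_volume_size
}
```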
base image --> volume image

test it

10G! destroy

Swap
I would like to have a swap partition, and I will use cloud-init to create it. Modify user-data.yml:
test it

success !!

Network
How about internet? Network? Yes, what about it? I guess you need to connect to the internets; okay then. The easiest way is to add this to your Domain.tf:
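A sketch of the network block (the network name matches libvirt's usual default):

```hcl
# added inside the libvirt_domain resource
network_interface {
  network_name = "default"
}
```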
This will use the default libvirt network resource.

If you prefer to directly expose your VM to your local network (be very careful), then replace the above with a macvtap interface. If your ISP router provides dhcp, then your VM will take a random IP from your router.

test it

destroy

SSH
Okay, now that we have network, it is possible to set up ssh to our virtual machine and also auto-create a user. I would like cloud-init to get my public key from github and set up my user.

Notice, I have added a new variable:

Variables.tf

and do not forget to update your cloud-init tf, Cloudinit.tf

Update VM
I would also like to update & install specific packages on this virtual machine:
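A sketch of the package section for user-data.yml (the package list is an example assumption):

```yaml
#cloud-config
package_update: true
package_upgrade: true
packages:
  - vim
  - qemu-guest-agent
```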
Yes, I prefer to uninstall cloud-init at the end.

user-data.yaml
The entire user-data.yaml looks like this:

Output
We need to know the IP to log in, so create a new terraform file to get the IP. Output.tf
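A sketch of such an Output.tf (the domain resource name is an assumption):

```hcl
output "VM_IP" {
  value = libvirt_domain.ubuntu-2004-vm.network_interface[0].addresses
}
```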
but that means that we need to wait for the dhcp lease. Modify Domain.tf to tell terraform to wait.
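A sketch of the modified network block with the wait flag:

```hcl
# inside the libvirt_domain resource
network_interface {
  network_name   = "default"
  # block until the VM obtains a dhcp lease, so the IP shows up in outputs
  wait_for_lease = true
}
```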
Plan & Apply

Verify

what!!!! awesome

destroy

Custom Network
One last thing I would like to discuss is how to create a new network and provide a specific IP to your VM. This will separate your VMs/lab, and it is cheap & easy to set up a new libvirt network. Network.tf
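A hedged sketch of such a Network.tf (the network name, mode and subnet are assumptions):

```hcl
resource "libvirt_network" "vm_network" {
  name      = "vm_network"
  mode      = "nat"
  addresses = ["192.168.100.0/24"]
  dhcp {
    enabled = true
  }
}
```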
and replace

in Domain.tf

Look closely: there is a new terraform variable. Variables.tf

Terraform files
You can find every terraform example in my github repo tf/0.13/libvirt/0.6.2/ubuntu/20.04 at master · ebal/tf · GitHub

That’s it! If you like this article, consider following me on twitter ebalaskas.

Tag: libvirt, kvm, cloud-init, terraform, ubuntu, qemu

Curse of knowledge
[Originally published at Linkedin on October 28, 2018]

The curse of knowledge is a cognitive bias that occurs when an individual, communicating with other individuals, unknowingly assumes that the others have the background to understand.

Let’s talk about documentation
This is the one big elephant in every team’s room.

TLDR; Increment: Documentation

Documentation empowers users and technical teams to function more effectively, and can promote approachability, accessibility, efficiency, innovation, and more stable development. Bad technical guides can cause frustration, confusion, and distrust in your software, support channels, and even your brand, and they can hinder progress and productivity internally. So, to avoid situations like these: xkcd - wisdom_of_the_ancients or/and Optipess - TitsMcGee4782, documentation must exist!

Myths
Problems
Types of documentation
Why Documentation Is Important
Communication is a key to success. Documentation is part of the communication process. We either try to communicate or collaborate with our customers or even within our own team. We use our documentation to inform customers of new features and how to use them, to train our internal team (colleagues), to collaborate with them, reach out, help out, connect, and communicate our work with others. When writing code, the documentation should be the “One Truth”, instead of the code repository. I love working with projects that will not deploy a new feature before updating the documentation first. For example, I read the ‘Release Notes for Red Hat’ and the ChangeLog instead of reading myriads of code repositories.

Know Your Audience
Try to answer these questions:
Use personas to create different material. Try to remember this one golden rule: the audience should get value from the documentation (learning or achieving something).

Guidelines
Here are some guidelines:
Even on a technical document try to:
UXA picture is worth a thousand words so remember to:
Customers and users want to write nothing. By reducing user input, your project will be more fault tolerant. Instead of providing a procedure for a deploy pipeline, give them a script or a GUI/Web user interface, and focus your documentation around that.

Content
So, what to include in the documentation?
Imagine your documentation as microservices instead of a huge monolith project. Usually a well-defined structure looks like this:

Tools
I prefer wiki pages instead of a word document, because of the below features:

btw, if you are using Confluence, there is a Markdown plugin.

Measurements & Feedback
To understand whether your documentation is good or not, you need feedback. But first there is an evaluation process: review. It is the same as writing code; you can’t review your own code! The first feedback should come from within your team. Use analytics to measure whether people read your documentation, from ‘hits per page’ to more advanced analytics such as Matomo (formerly Piwik). Profiling helps you understand your audience and what they like in documentation. Customer satisfaction (CSat) metrics are important in documentation.

Make it easy for people to share their feedback, and find a way to include their comments in it.

FAQ
Frequently Asked Questions should answer questions in the style of:

A FAQ or Q&A should be as straightforward, short and simple as it can be. You are writing a FAQ to help customers learn how to use a specific feature, not the entire software. Use links that direct them to your main documentation for more detailed info.

Conclusion
Sharing knowledge shapes the culture of your team/end users. Your documentation should reflect your customers’ needs. Everything you do in your business is to satisfy your customers, and documentation is one way to communicate this. So here are some final points on documentation:
How to build an SSH Bastion host
[this is a technical blog post, but easy to follow]

Recently I had to set up and present my idea of an ssh bastion host. You may have already heard of this as a jump host, a security ssh hopping station, an ssh gateway, or even something else.

The main concept
Disclaimer: This is just a proof of concept (PoC). It may need a few adjustments. The destination VM may be on another VPC; perhaps it does not have a public DNS or even a public IP. Think of this VM as not accessible. Only the ssh bastion server can reach this VM. So we need to reach the bastion first.

SSH Config
To begin with, I will share my initial sshd_config to get an idea of my current ssh setup:

This configuration is almost identical on both VMs.

~/.ssh/config
I am using the ssh config file to have an easier user experience when using ssh.
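One way to express the bastion hop in ~/.ssh/config is a ProxyJump entry (OpenSSH 7.3+); the host names, IP and user below are assumptions:

```
Host bastion
    HostName bastion.example.org
    User ebal

Host destination
    HostName 10.0.1.10
    User ebal
    # reach the destination VM through the bastion host
    ProxyJump bastion
```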
Create a new user to test this
Let us create a new user for testing. User/Group

Perms
Copy the .ssh directory from the current user (<== lazy sysadmin)

bastion sshd config
Edit the ssh daemon configuration file to append the below entries:
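Based on the explanation that follows, the appended entries could look like this (the usernames are examples):

```
# /etc/ssh/sshd_config — restrict logins to these two users
AllowUsers ebal ebal_test
```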
Don’t forget to restart sshd.

As you have seen above, I now allow two (2) users to access the ssh daemon (AllowUsers). This can also work with AllowGroups.

Testing bastion
Let’s try to connect to this bastion VM:

Interesting … We can not log in to this machine. Let’s try with our personal user:

Perfect. Let’s try from windows (mobaxterm). mobaxterm is putty on steroids! There is also a portable version, so there is no need for installation; you can just download and extract it. Interesting…

Destination VM
Now it is time to test our access to the destination VM:

bastion

Success!

Explain Command
Using this command:
So we can have different users!

ssh/config
Now it is time to put everything in our ~/.ssh/config file:

and try again

mobaxterm with bastion

How to use cloud-init with Edis
It is a known fact that my favorite hosting provider is edis. I’ve seen them improving their services all these years, without forgetting their customers. Their support is great and I am really happy working with them. That said, they don’t offer (yet) a public infrastructure API like hetzner, linode or digitalocean, but they offer an Auto Installer option to configure your VPS via a post-install shell script, put your ssh key in place and select your basic OS image. I have been experimenting with this option for the last few weeks, but I wanted to use my current cloud-init configuration file without making many changes. The goal is to produce a VPS image that, when finished, will be ready to accept my ansible roles without making any additional change or even logging in to the VPS. So here is my current solution on how to use the post-install option to provide my current cloud-init configuration!

cloud-init
cloud-init documentation
Josh Powers @ DebConf17

I will not get into cloud-init details in this blog post, but tldr; it has stages, it has modules, you provide your own user-data file (yaml) and it supports datasources. All these things are for telling cloud-init what to do, what to configure and when to configure it (in which step).

NoCloud Seed
I am going to use the NoCloud datasource for this experiment, so I need to configure these two (2) files:
Install cloud-init
My first entry in the post-install shell script should be:

Thus I can be sure of two (2) things: first, the VPS already has network access and I don’t need to configure it; second, the cloud-init software gets installed, just to be sure it is there.

Variables
I try to keep my user-data file very simple, but I would like to configure the hostname and the sshd port.
Users
Add a single user, provide a public ssh key for authentication and enable sudo access for this user.
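A sketch of such a users section in cloud-config (the username and the github handle for ssh_import_id are assumptions):

```yaml
#cloud-config
users:
  - name: ebal
    # fetch the public key from github for authentication
    ssh_import_id:
      - gh:ebal
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
```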
Hardening SSH

enable firewall

remove cloud-init
And last but not least, I need to remove cloud-init at the end.

Post Install Shell script
Let’s put everything together:

That’s it! After a while (it needs a few reboots) our VPS is up & running and we can use ansible to configure it.

Tag: edis, cloud-init

a story about inclusion in tech
Posted by ebal at 14:30:40 in blog

The events of the last days made me rethink this story. I am not the hero of the story.

I was in my early 20s, working part time in the tech lab of my uni. In this lab I met another student; I will call him Bob instead of his real name. I was just a couple of months away from getting my degree. He was ten years older than me, still trying to get through the studies for his. We met, and for the next couple of weeks we worked together, both part time in this lab.

Bob was deaf. He could speak, but due to the fact that he could not hear his own voice, the words he made were not very clear. He was struggling with the courses. Bob was able to read lips, but you had to speak directly to him and not very fast. The majority of our courses had custom textbooks and they were difficult. Dual courses, where theory and lab were not always on the same subject: theory was about compilers, lab was about pascal (just to give you an idea).

It was a difficult time for me. Back then (end of the ’90s, beginning of the ’00s) the internet was not the place it is today. To solve a problem, you had to find the reference manual, read it through, understand it and learn from this process. Nowadays most of us use search engines to copy/paste commands or software solutions. It wasn’t like that back then.

Bob was a good worker, but it was very difficult for him. He could pass some of the labs, but he had problems with the theory courses. Ten years of his life in uni. I tried to help him with some of the courses. I was happy I was able to help; he was happy to find someone to help him. But I could not do much. To explain something, back then, was not very easy for me. I also had to slow my speech, find simple words and somehow translate some of the terminology into something that Bob could understand.
I wasn’t the best person to do that, and he knew it. I don’t know if I ever managed to actually help him. One time, I asked him:
he replied
I had never understood the privilege I had until that moment. This was a true lesson for me. It was hard, it was difficult. I was in all the theory classes, I was active, engaged, worked together with other students, asked a gazillion questions, and it was still difficult for me. You know what? It was 10 times harder for people with a disability! But I never, ever had any idea; I was looking at everything from my perspective, and this is not how you build a community or a society. You need empathy and understanding.

He also had some bitterness; it was not fair to him. Understandable, but he was also angry with the system, with the systematic exclusion. He bad-mouthed his peer students and our teachers. He had had enough. Sometimes he wanted to break everything in the lab instead of fixing it. From time to time he was depressed or angry, but a few times he was also happy. When he worked in the lab, he put his soul into his work, but the majority of people didn’t expect much of him, and sometimes he would not even try much. This was his life. People saw him as a person with a disability, and treated him like that. He was also proud: reaching a goal, passing a course, achieving something, overcoming his disability against all odds.

One day, I gathered my entire courage to ask him, point blank, a very stupid question:
I will never forget his answer. To this day, I am still thinking about what he said to me.
I lost every word and almost broke into tears.

Tag: story

Network Namespaces - Part Three
Previously, in Network Namespaces - Part Two, we provided internet access to the namespace, enabled a different DNS than our system’s and ran a graphical application (xterm/firefox) from within it. The scope of this article is to run a vpn service from this namespace. We will run a vpn client and try to enable firewall rules inside.

dsvpn
My VPN of preference is dsvpn, and you can read how to set it up in the blog post below:
dsvpn is a TCP, point-to-point VPN using a symmetric key. The instructions in this article will give you an understanding of how to run a different vpn service.

Find your external IP
Before running the vpn client, let’s see what our current external IP address is:
The above IP is an example.

IP address and route of the namespace

Firefox
Open firefox (see part two) and visit:

We notice that the location of our IP is based in Athens, Greece.

Run VPN client
We have the symmetric key dsvpn.key and we know the VPN server’s IP.
Host
We can not see this vpn tunnel interface from our host machine.

netns
But it exists inside the namespace; we can see the tun0 interface here:

Find your external IP again
Checking your external internet IP from within the namespace:

Firefox netns
Running firefox again, we will notice that the location of our IP is now based in Helsinki (the vpn server’s location).

systemd
We can wrap the dsvpn client in a systemd service:

Start the systemd service:

Verify
Firewall
We can also have different firewall policies for each namespace.

outside

inside

So for the VPN service running inside the namespace, we can REJECT all network traffic except the traffic towards our VPN server and, of course, the veth interface (point-to-point) to our host machine.

iptable rules
Enter the namespace

inside

Before
Verify that the iptables rules are clear:
Enable firewall

The content of this file:

After

PS: We reject tcp/udp traffic (the last 2 lines), but allow icmp (ping).

End of part three.

Tag: linux, namespaces, network, ip-netns, veth, iproute2

Network Namespaces - Part Two
Previously, in Network Namespaces - Part One, we discussed how to create an isolated network namespace and use veth interfaces to talk between the host system and the namespace. In this article we continue our story and will try to connect that namespace to the internet.

recap previous commands

Access namespace

Ping Veth
It’s not a gateway; this is a point-to-point connection.
Ping internet
Trying to access anything else…

Exit from the namespace.

Gateway
We need to define a gateway route from within the namespace:

test connectivity - system
We can reach the host system, but we can not visit anything else.

Forward
What is the issue here? We added a default route to the network namespace. Traffic will start from v-ebal (the veth interface inside the namespace), go to v-eth0 (the veth interface on our system) and then… then nothing. The eth0 interface receives the network packages but does not know what to do with them. We need to create a forward rule on our host, so the eth0 network interface will know to forward traffic from the namespace to the next hop.
or

permanent forward
If we need to permanently tell eth0 to always forward traffic, then we need to edit /etc/sysctl.conf and add the below line:

To enable this option without rebooting our system:

verify

Masquerade
But if we test again, we will notice that nothing happened. Actually something did happen, but not what we expected. At this moment, eth0 knows how to forward network packages to the next hop (perhaps the next hop is the router or internet gateway), but the next hop will get a package from an unknown network. Remember that our internal network is 10.10.10.20, with a point-to-point connection to 10.10.10.10. So there is no way for network 192.168.122.0/24 to know how to talk to 10.0.0.0/8. We have to masquerade all packages that come from 10.0.0.0/8, and the easiest way to do this is via iptables, using the postrouting nat table. That means the outgoing traffic with source 10.0.0.0/8 will have a mask: it will pretend to be from 192.168.122.80 (eth0) before going to the next hop (gateway).
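A hedged sketch of such a masquerade rule, using the interface and networks from the article's example (adjust to your own setup):

```shell
# NAT: rewrite the source address of packages coming from the namespace
# network (10.0.0.0/8) to eth0's address before they reach the next hop
iptables -t nat -A POSTROUTING -s 10.0.0.0/8 -o eth0 -j MASQUERADE
```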
Test connectivity
Test the namespace connectivity again:

success

DNS
Almost! If you noticed carefully above, ping on the IP works, but not with name resolution.

netns - resolv
Reading the ip-netns manual:

We can create a resolver configuration file at this location:
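Per the ip-netns manual, a per-namespace resolver file lives under /etc/netns/NAME/resolv.conf. A sketch, assuming the namespace is named ebal and using a placeholder nameserver IP (substitute the actual radicalDNS address):

```shell
mkdir -p /etc/netns/ebal
echo "nameserver 9.9.9.9" > /etc/netns/ebal/resolv.conf
```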
I am using radicalDNS for this namespace.

Verify DNS

Connect to the namespace:

bonus - run firefox from within the namespace

xterm
Start with something simple first, like xterm:

or

test firefox
Trying to run firefox within this namespace will produce an error,
and xauth info will inform us that the current Xauthority file is owned by our local user.

Okay, get inside this namespace

and provide a new authority file for firefox:

xhost
xhost provides access control to the Xorg graphical server. By default it should look like this:

We can also disable access control

but what we need to do is disable access control for local connections:

firefox
And if we do all that…

End of part two.

Tag: linux, namespaces, network, ip-netns, veth, iproute2

Network Namespaces - Part One
Have you ever wondered how containers work on the network level? How do they isolate resources and network access? Linux namespaces are the magic behind all this, and in this blog post I will try to explain how to set up your own private, isolated network stack on your linux box.

Notes based on ubuntu:20.04, root access.

current setup
Our current setup is similar to this.

List ethernet cards
41
73 List routing table
42
74 Checking internet access and dns
43
75 linux network namespace managementIn this article we will use the below programs:
So, let us start working with network namespaces.

list

To view the network namespaces, we can type:
This will return nothing, an empty list.

help

Quickly view the help of ip-netns:

monitor

To monitor any changes in real time, we can open a new terminal and type:
Add a new namespace:

List namespaces:

We have one namespace.

Delete namespace:
Full example:
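A sketch of the full add/list/exec/delete cycle described above; the namespace name ns0 is an arbitrary choice, and since ip-netns needs root the commands are collected into a reviewable script:

```shell
# sketch: the whole ip-netns lifecycle in one script (run it as root)
cat > /tmp/netns-example.sh <<'EOF'
#!/bin/sh
ip netns add ns0        # create the namespace
ip netns list           # it should now appear in the list
ip netns exec ns0 ip a  # run a command inside it
ip netns del ns0        # and delete it again
EOF
chmod +x /tmp/netns-example.sh
```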
Monitor output:
Directory

When we create a new network namespace, it creates an object under
/var/run/netns/.
exec

We can run commands inside a namespace, e.g.:
bash

We can also open a shell inside the namespace and run commands through that shell, e.g.:
As you can see, the namespace is isolated from our system. It has only one local interface and nothing else. We can bring the loopback interface up:
veth

The veth devices are virtual Ethernet devices. They can act as tunnels between network namespaces to create a bridge to a physical network device in another namespace, but they can also be used as standalone network devices. Think of a veth pair as a physical cable that connects two different computers; each veth is one end of this cable. So we need two virtual interfaces to connect our system and the new namespace.
e.g.:
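A sketch of creating such a pair; the names veth0 and veth1 are conventional choices, and since the command needs root it is written to a script here for review:

```shell
# sketch: create a veth pair, two connected virtual interfaces (run as root)
cat > /tmp/veth-create.sh <<'EOF'
#!/bin/sh
ip link add veth0 type veth peer name veth1
ip link show type veth
EOF
chmod +x /tmp/veth-create.sh
```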
Attach veth0 to the namespace

Now we are going to move one virtual interface (one end of the cable) to the new network namespace:

We will see that the interface is no longer showing on our system,
while inside the namespace:
Connect the two virtual interfaces

Outside:

Inside:
Both interfaces are down!

Now we need to bring both interfaces up.

Outside:

Inside:
Did it work?

Let's first check our routing table.

Outside:

Inside:

Ping!

Outside:

Inside:
It worked!!

End of part one.

Tag: linux, namespaces, network, ip-netns, veth, iproute2

cloudflared as a DoH client with LibreDNS

Cloudflare has released an Argo Tunnel client named cloudflared. It is also a DNS over HTTPS (DoH) client, and in this blog post I will describe how to use cloudflared with LibreDNS, a public encrypted DNS service that people can use to maintain the secrecy of their DNS traffic, but also to circumvent censorship. Notes based on ubuntu 20.04, as root.

Download and install the latest stable version:
Check the version:

DoH support:

LibreDNS Endpoints

LibreDNS has two endpoints:
The latter blocks trackers/ads etc.

Standalone

We can use cloudflared standalone for testing, here on a non-standard TCP port:
Testing the ads endpoint:
conf

We have verified that cloudflared works with LibreDNS, so let us create a configuration file. By default, cloudflared tries to find one of the files below (replace root with your user):
The most promising file is:
Create the configuration file
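A sketch of such a configuration file, pointing proxy-dns at the LibreDNS DoH endpoint; the listening port 5353 is an arbitrary choice:

```shell
# sketch: a cloudflared proxy-dns configuration using LibreDNS
mkdir -p "$HOME/.cloudflared"
cat > "$HOME/.cloudflared/config.yml" <<'EOF'
proxy-dns: true
proxy-dns-port: 5353
proxy-dns-upstream:
  - https://doh.libredns.gr/dns-query
EOF
```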
or for the ads endpoint:

Testing:
Service

If you are a user of Argo Tunnel and you have a cloudflare account, you can log in and get your cert.pem key. Then (and only then) you can install cloudflared as a service with:
and you can use

or

and you must have two files:
That's it!

Run your CI tests with GitLab-Runner on your system

GitLab is a truly wonderful devops platform. It has a complete CI/CD toolchain, it is opensource (GitLab Community Edition) and it can also be self-hosted. One of its greatest features is the GitLab Runner, used in CI/CD pipelines. The GitLab Runner is also an opensource project, written in Go, that handles the CI jobs of a pipeline. GitLab Runner implements Executors to run continuous integration builds for different scenarios, and the most used of them is the docker executor, although nowadays many sysadmins are migrating to kubernetes executors. I have a few personal projects in GitLab under
but I would like to run GitLab Runner locally on my system for testing purposes. GitLab Runner has to register to a GitLab instance, but I do not want to install the entire GitLab application. I want to use the docker executor and run my CI tests locally. Here are my notes on how to run GitLab Runner with the docker executor. No root access is needed, as long as your user is in the docker group. To give a sense of what this blog post is about, the image below will act as a reference.

GitLab Runner

The docker executor comes in two flavors:
In this blog post, I will use the ubuntu flavor. Get the latest ubuntu docker image:

Verify:
exec help

We are going to use the exec command to spin up the docker executor. With exec we do not need to register with a token.
Git Repo - tmux

Now we need to download the git repo we would like to test. Inside the repo, a .gitlab-ci.yml file should exist. The gitlab-ci file describes the CI pipeline, with all its stages and jobs. In this blog post, I will use a simple repo that builds the latest version of tmux for centos6 & centos7.
Docker In Docker

The docker executor will spawn the GitLab Runner container. GitLab Runner needs to communicate with our local docker service to spawn the CentOS docker image and run the CI job, so we need to pass the docker socket from our local docker service into the GitLab Runner container. To test dind (docker-in-docker), we can try one of the commands below:

or

Limitations

There are some limitations of gitlab-runner exec: we cannot run stages and we cannot download artifacts.
Jobs

So we have to adapt. As we cannot run stages, we will tell gitlab-runner exec to run one specific job. In the tmux repo, build-centos-6 is the build job for centos6 and build-centos-7 for centos7.

Artifacts

GitLab Runner will use /builds as its build directory. We need to mount this directory read-write to a local directory to get the artifact.
The docker executor has many docker options; for example, there are options to set up a different cache directory. To see all the docker options, type:
Bash Script

We can put everything from above into a bash script. The bash script will mount our current git project directory into the gitlab-runner container; then, with the help of dind, it will spin up the centos docker container, passing our code and gitlab-ci file, run the CI job and save the artifacts under /builds.
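A sketch of such a bash script; the image tag gitlab/gitlab-runner:ubuntu, the default job name build-centos-7 and the mount paths are assumptions, so adjust them to your repo:

```shell
# sketch: run a single gitlab-ci job locally with the docker executor
cat > /tmp/run-ci-job.sh <<'EOF'
#!/usr/bin/env bash
set -eu
JOB="${1:-build-centos-7}"   # which job from .gitlab-ci.yml to run
mkdir -p "$PWD/builds"
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$PWD":"$PWD" -w "$PWD" \
  gitlab/gitlab-runner:ubuntu \
  exec docker \
    --docker-volumes /var/run/docker.sock:/var/run/docker.sock \
    --docker-volumes "$PWD/builds":/builds:rw \
    "$JOB"
EOF
chmod +x /tmp/run-ci-job.sh
```

Run it from the root of the git repo, e.g. `/tmp/run-ci-job.sh build-centos-6`.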
That's it. You can try it with your own gitlab repos, but don't forget to edit the gitlab-ci file accordingly, if needed.

Full example output

Last, but not least, here is the entire walkthrough:
Artifacts

And here is the tmux-3.1-1.el6.x86_64.rpm:

Docker processes

If we run the following from another terminal, we will see something like this:
Tag: gitlab, gitlab-runner, tmux, centos6, centos7, docker, dind

Upgrading from Ubuntu 18.04 LTS to Ubuntu 20.04 LTS Server Edition

Disclaimer: at this moment there is no "official" server version of 20.04 LTS available, so we will use the development 20.04 release.

Maintenance

If this is a production server, do not forget to inform customers/users/clients that this machine is under maintenance before you start.

Backup

When was the last time you took a backup? Now is a good time. Try to verify your backup; otherwise, do not proceed.

Update your current system

Before continuing with the dist upgrade to 20.04 LTS, we need to update & upgrade our current LTS version. Login to your system: ~> ssh ubuntu1804
A reboot is necessary.

Update:

Upgrade:

Reboot:

Do release upgrade
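The update/upgrade/release-upgrade sequence above can be sketched as one small script; the -d flag asks for the development release, which is needed while 20.04 has not been officially announced. Run it as root (or with sudo), after taking a backup:

```shell
# sketch: the full upgrade sequence, collected into one reviewable script
cat > /tmp/upgrade-2004.sh <<'EOF'
#!/bin/sh
apt update
apt -y upgrade
# reboot here if a new kernel was installed, then continue with:
do-release-upgrade -d   # -d: upgrade to the development release
EOF
chmod +x /tmp/upgrade-2004.sh
```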
Help:

do-release-upgrade:

Server:
At this moment, we will switch to a gnu/screen session:

Press: y

Press Enter.

Update repos:

…

Press y (or review by pressing d).

Fetching packages:
Services

At some point a question will pop up:

I answered Yes, but you should answer it however you prefer.

Patience is a virtue

Get a coffee or tea. Read a magazine. Patience is a virtue till you see a jumping animal.

resolved:

I answered Y here; I will change it later.

vim

Same here:
ssh conf

Remove obsolete packages

And finally:

Press y to continue.

Restart

Are you ready to restart your machine?

Press y to restart.

LTS 20.04
Tag: ubuntu, 18.04, 20.04, LTS

Using LibreDNS with dnscrypt-proxy

Using DNS over HTTPS, aka DoH, is fairly easy with the latest version of firefox. Using LibreDNS takes just a few settings in your browser, see here. On the LibreDNS site, there are also instructions for DNS over TLS, aka DoT. In this blog post, I am going to present how to use dnscrypt-proxy as a local dns proxy resolver, using DoH with the LibreDNS noAds (no tracking) endpoint. With this setup, your entire operating system can use this endpoint for everything.

Disclaimer: This blog post is about dnscrypt-proxy version 2.

dnscrypt-proxy

dnscrypt-proxy 2 - A flexible DNS proxy, with support for modern encrypted DNS protocols such as DNSCrypt v2, DNS-over-HTTPS and Anonymized DNSCrypt.

Installation:
Verify package:

Disable systemd-resolved

If necessary:

Configuration

It is time to configure dnscrypt-proxy to use LibreDNS.
At the top of the file, there is a server_names section:
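The change is a single line; a sketch of the relevant snippet follows. The server name libredns-noads is my assumption of how the noAds endpoint appears in the public resolvers list, so verify it against the comments in your own dnscrypt-proxy.toml:

```shell
# sketch: the one line to change in dnscrypt-proxy.toml (written to /tmp here for illustration)
cat > /tmp/dnscrypt-snippet.toml <<'EOF'
# pin dnscrypt-proxy to the LibreDNS noAds endpoint
server_names = ['libredns-noads']
EOF
```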
Resolv Conf

We can now change our resolv.conf to use our local IP address.
49
50 Verify
51
52 Digasking our local dns (proxy)
53
That's it! Your system is now using the LibreDNS DoH noAds endpoint.

Manual Steps

If your operating system does not yet support dnscrypt-proxy 2, then:

Latest version

You can always download the latest version from github. To view the files:
56 Prepare the configuration
57 In the top of the file, there is a server_names section
46
59 Run as root
60 Check DNSInteresting enough, first time is 250ms , second time is zero!
61 That’s it Tag: LibreDNS, dnscrypt-proxy Tools I use daily the Win10 edition almost three (3) years ago I wrote an article about the Tools I use daily. But for the last 18 months (or so), I am partial using windows 10 due to my new job role, thus I would like to write an updated version on that article. I’ ll try to use the same structure for comparison as the previous article, keep in mind this a nine to five setup (work related). So here it goes. NOTICE beer is just for decor ;) Operating SystemI use Win10 as my primary operating system in my worklaptop. I have a couple of impediments that can not work on a linux distribution but I am not going to bother you with them (it’s webex and some internal internet-explorer only sites). We used to use webex as our primary communication tool. We are sharing our screen and have our video camera on, so that everybody can see each other.Working with remote teams, it’s kind of nice to see the faces of your coworkers. A lot of meetings are integrated with the company’s outlook. I use OWA (webmail) as an alternative but in fact it is still difficult to use both of them with a linux desktop. We successful switched to slack for text communications, video calls and screen sharing. This choice gave us a boost in productivity as we are now daily using slack calls to align with each other. Although still webex is in the mix. Company is now using a newer webex version that works even better with browser support so that is a plus. It’s not always easy to get everybody with a webex license but as long as we are using slack it is okay. Only problem with slack in linux is when working with multiple monitors, you can not choose which monitor to share. I have considered to use a VM (virtual machine) but a win10 vm needs more than 4G of RAM and a couple of CPUs just to boot up. In that case, it means that I have to reduce my work laptop resources for half the day, every day. 
So for the time being, I am staying with Win10 as the primary operating system. I have to use the winVM for some other internal work, but only for limited time.

Desktop

Default Win10 desktop. I daily use these OpenSource tools:
and from time to time, I also use:
Except plumb, everything else is opensource! So I am trying to have the same user desktop experience as on my Linux desktop; for example, my language switch is capslock (autohotkey), I don't even think about it.

Disk / Filesystem

Default Win10 filesystem with bitlocker. Every HW change will lock the entire system. In the past this happened twice with a windows firmware device upgrade. Twice! Dropbox as cloud sync software, with an EncFSMP partition, and syncthing for securely syncing personal files. (same setup as linux, except bitlocker instead of luks) OWA for calendar purposes and … still Thunderbird for primarily reading mails. Thunderbird 68.6.0 AddOns:
(same setup as linux)

Shell

Windows Subsystem for Linux aka WSL … waiting for the official WSLv2! This is a huge, HUGE upgrade for windows. I have set up an Arch Linux WSL environment to continue working in a linux environment, I mean bash. I use my WSL archlinux as a jumphost to my VMs.

Terminal Emulator
Editor

Using Visual Studio Code for scripting; vim within WSL and notepad for temporary text notes. I have switched to Boostnote for markdown and as my primary note editor. (same setup as linux)

Browser

Multiple instances of Firefox, Chromium, Tor Browser and Brave. Primary browser: Firefox. Primary private browsing: Brave. (same setup as linux)

Communication

I mostly use Slack and Signal Desktop. We are using webex, but I prefer Zoom. Riot/Matrix for decentralized groups and an IRC bridge. To be honest, I also use Viber & messenger (only through the web browser). (same setup as linux - minus the Viber client)

Media

VLC for windows, what else? Also GIMP for image editing. I have switched to Spotify for music and draw.io for diagrams. Last, I use CPod for podcasts. Netflix (sometimes). (same setup as linux)

In conclusion

I have switched to a majority of electron applications. I use the same applications on my Linux boxes. Encrypted notes in Boostnote, synced over syncthing. Same browsers, same bash/shell; the only things I don't have on my linux boxes are webex and outlook. Considering everything else, I think it is a decent setup across every distro. Thanks for reading my post.

Tag: win10

restic with minio

restic is a fast, secure & efficient backup program. I had wanted to test restic for some time. It is a Go backup solution, I would say similar to rclone, but it has a unique/different design. I prefer having an isolated clean environment when testing software, so I usually go with a VM. For this case, I installed elementary OS v5.1, an ubuntu LTS based distro focused on user experience. As the backup storage solution, I used MinIO, an S3 compatible object storage, on the same VM. So here are my notes on restic, and at the end of this article you will find how I set up minio. Be aware: this is a technical post!

restic

Most probably your distro package manager already has restic in its repositories.
Download latest version

But just in case you want to install the latest binary version, you can use this command,

or, if you are already root:

We can see the latest version:

Autocompletion

Enable autocompletion:

Restart your shell.

Prepare your repo

We need to prepare our destination repository. This is our backup endpoint. restic can save multiple snapshots for multiple hosts on the same endpoint (repo). Apart from the files stored within the keys directory, all files are encrypted with AES-256 in counter mode (CTR). The integrity of the encrypted data is secured by a Poly1305-AES message authentication code (sometimes also referred to as a "signature"). To access a restic repo, we need a key. We will use this key as a password (or passphrase), and it is really important NOT to lose it. For automated backups (or scripts) we can use our shell's environment variables to export the password. It is best to export the password through a script, or even better through a password file.
e.g.:
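A minimal sketch of creating such a password file; the path and the key length are arbitrary choices. Losing this file means losing access to the repo:

```shell
# sketch: generate a random password file for restic, readable only by us
( umask 077; head -c 32 /dev/urandom | base64 > "$HOME/.restic-pass" )
# point restic at it for all following commands
export RESTIC_PASSWORD_FILE="$HOME/.restic-pass"
```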
We can also declare the restic repository through an environment variable:

Local Repo

An example of a local backup repo should look something like this:

minio S3

We are going to use minio as the S3 object storage, so we need to export the Access & Secret Key in a similar way as for amazon S3.
The S3 endpoint is:

so a full example should be:
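A sketch of such a config file; the credentials, bucket name and endpoint here are example values, so replace them with your own:

```shell
# sketch: restic + minio environment config, kept in a file we can source later
cat > "$HOME/.restic.conf" <<'EOF'
export AWS_ACCESS_KEY_ID="minioadmin"
export AWS_SECRET_ACCESS_KEY="minioadmin"
export RESTIC_REPOSITORY="s3:http://localhost:9000/backup"
export RESTIC_PASSWORD_FILE="$HOME/.restic-pass"
EOF
```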
Source the config file into your shell:

Initialize Repo

We are ready to initialize the remote repo:

Be careful: if you are asked to type a password, it means that you did not use a shell environment variable to export the password. That is fine, but only if that was your intention. Then you will see something like this:

Backup

We are ready to take our first snapshot:
You can exclude or include files with restic, but I will not get into this right now. For more info, read the Restic Documentation.

Standard input

restic can also back up data read from standard input:
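A sketch of a stdin backup; mysqldump is just an example producer, any command that writes to stdout works. It is collected into a script since it needs restic and a configured repo:

```shell
# sketch: back up a database dump straight from standard input
cat > /tmp/restic-stdin.sh <<'EOF'
#!/bin/sh
# any stdout-producing command works here; mysqldump is only an example
mysqldump mydb | restic backup --stdin --stdin-filename mydb.sql
EOF
chmod +x /tmp/restic-stdin.sh
```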
Check:

Take another snapshot:

List snapshots:
Remove snapshot

As you can see, I had one more snapshot before my home dir and I want to remove it:

List again:

Compare snapshots:

Mount a snapshot:

Open another terminal:
So, as we can see, snapshots are organized by time.

Be aware: as long as we have the restic backup mounted, there is a lock on the repo. Do NOT forget to close the mount point when finished.

Check again

You may need to re-check to see if there is still a lock on the repo:
Restore a snapshot

Identify which snapshot you want to restore:

Create a folder and restore the snapshot:

List files from the snapshot:

Keys:
restic rotate snapshot policy

A few more words about forget: forget mode has a feature of keeping the last n snapshots, where n can be given per interval (hourly, daily, weekly, monthly, yearly), and this makes restic with a local repo an ideal replacement for rsnapshot!
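A sketch of such an rsnapshot-like rotation policy; the keep numbers are arbitrary examples, and the script path is my choice:

```shell
# sketch: a rotation policy script, e.g. to be run daily from cron
cat > /tmp/restic-rotate.sh <<'EOF'
#!/bin/sh
# keep the last 5 snapshots plus 7 daily, 4 weekly and 6 monthly; prune the rest
restic forget --keep-last 5 --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune
EOF
chmod +x /tmp/restic-rotate.sh
```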
Appendix - minio

MinIO is an S3 compatible object storage.

Install server:

Run server:

Browser

Create a demo bucket.

Install client:
Configure client:

Run the mc client:

mc autocompletion:

You need to restart your shell.
That's it!

Tag: restic, minio

The story of my first job in the Tech Industry

The other day I was thinking about my first ever job in this industry, as a junior software engineer at the age of 20. I was doing okay with my studies at the Athens University of Applied Sciences, but I was working outside of this industry. I had to gain some working experience in the field, so I decided to find part-time work in a small software house. The (bad) experience and lessons learned in those couple of weeks are still with me to this day … almost 20 years after!

Introductions

I got a flyer from the job board at school and walked a couple of kilometers to the address of the place. I didn't have a car back then (or for the next 7 years), so I had to use public transportation (bus) or walk wherever I wanted to go. I rang the doorbell around noon and went up to the second floor. There I introduced myself and asked for an opportunity to work with them. The owner/head of the software team asked me a few things and got to the technical parts of the job.
Impressed that I was going to work with the next amazon, I immediately said Yes to the offer.

HTML4
He smiled at me and gave me (I think) this 800-page book to read about HTML4: HTML-4-Bible-Bryan-Pfaffenberger. He then told me:
That was Friday noon. I spent 10 hours quickly reading the book and keeping notes. Then I made a static demo site about Milos Island, where I had spent two weeks in the summer with my girlfriend. I had photos and material to write about, so I did that as an exercise. Monday morning, I presented him with my homework. He didn't believe me and spent a couple of hours talking about HTML4, just to verify that I had made the site by reading the book he gave me. In the end he was convinced.

Visual Studio

My next assignment was to learn about Visual Basic and Visual Studio. I had a basic idea about them, but I had never worked as a professional programmer, so he prepared a few coding exercises to get me familiar with the codebase. This was my onboarding period.
Next day, I was again first in the office.
Next day … I was back in the office.
Read it, returned the next day.
For the next two days, I worked there on coding exercises to get familiar with their codebase. He was impressed and I was very happy.

QA

Next day (Friday):
I took this task as a personal goal to prove myself. I worked ten hours that day and made a few comments on how to improve the customer experience. I asked if I could take the CD home with me, and tested it on my personal computer. It was a windows executable and the installer was pretty decent. Next, next, install, done. My windows 98 second edition didn't have enough free space on my hard disk, and I also needed to install oracle to work on my semester lab exercises. My 8G hard disk and the gazillion floppy disks around my home office on my Pentium III were my entire kingdom back then. So I uninstalled the application and rebooted my computer. Then something horrible happened: my computer could not start the operating system. There were indications of missing DLLs. I re-installed (repaired) windows and was curious about what had happened. I re-installed the application and uninstalled it once more. Rebooted windows, and again: missing DLLs.

First Conflict

I returned to the office on Monday morning and explained in detail the extreme bug I had found: when customers remove our software, they corrupt their operating system. The majority of our customers didn't have the technical experience to fix this problem. So I made it very clear that this was something we needed to fix ASAP, and that we should inform every customer not to remove our application and reboot their machine. I was really proud that I had found this super bug and that we were going to save our company. And then the owner told me:
Whatttt?

First business lesson:
Fixing Bugs

The next thing was to check the installer. We noticed that a few windows DLLs had been marked as required for our application to run. To avoid any mistakes, we copied these DLLs from the application's CD to our customers' windows. The uninstallation process was removing everything that had been installed, so … the windows DLLs were gone! It was a simple mistake and easy to fix: tick the correct checkbox so those files are not removed during uninstallation.

Distribution

We needed to distribute our application to all 2.000 customers all over Greece. We had to burn 2.000 physical CDs, print 2.000 CD covers, compile 2.000 CD cases and put them in 2.000 envelopes, and write 2.000 addresses on the envelopes. Then visit the local post office, pay for stamps etc. and mail 2.000 CDs to our customers' snail-mail addresses. We also had to provide letters of instructions:
Under no circumstances reboot your PC till the new version is up and running. Then copy your license key into the program and connect to the internet to upload your contracts/data, or sync your data from the central database to your laptop/desktop.

Money

For every patch (which meant a new CD to send), our business model was to get money from our customers for our work and for any expenses of distributing these CDs around Greece. That was the business deal with our customers. Customers were paying us for our mistakes, and it could also take a week or so for them to get the fix, depending on post office delays. License keys were valid (I am not sure, but I believe) for a year, and then there was a subscription model for the patches. If customers wanted to subscribe, they had to pay us for every CD, for every patch, for every mistake. Our business model depended on that.

Second Conflict

For some reason, I had opinions about this effort. I suggested using our web server (web site) to provide the patch, so customers could download it from the internet and install it immediately, without waiting for weeks till we sent the next CD with the latest version. Also, no extra money would be needed for the post office or for burning 2.000 CDs over the weekend. Customers would still pay for the patch (our work), so this way would be best for everybody. The owner replied that they made more money with the current system, so there was no need to make things easier or cheaper for customers, and I should keep these innovative ideas to myself. At that point, the thought that I wasn't working for the next amazon came to mind. They put this extra profit above their customers' needs.

Coding style

Finally, after my first week as an employee, I was now writing code as a software engineer. I did impressive work fixing bugs and refactoring code, and in a sense made our product better, faster and safer. I had ideas and worked closely with the senior programmer on a few things.
I was doing well: working fast, learning and providing value. I noticed a specific coding style, so I kept to it. The senior programmer could read my code and comments (I wrote a lot of comments) and vice versa. Finally I had joy in my work as a programmer.

Third Conflict

I vividly remember a specific coding issue, even 20 years after it happened. There was a form with 10 buttons; 10 clicks were the maximum possible events on this form. So I wrote a case statement with 9 events and one default. I submitted the code, and the owner/head software programmer came into the office yelling at me.
Final Discussion

After a couple of hours:
The truth bomb:
Exit

After those two weeks, I felt really awful. I felt like I didn't know anything about business, but he paid me for the whole month. After all these years, I now believe that he was afraid of my ideas: of using the internet to help our business and reduce customers' costs. But most of all, he was afraid of new people coming into his business and writing code that he could not understand. I made a promise to myself that day, that last Friday of my very first job:
Almost 20 years have passed since those two weeks. I never worked as a programmer; I chose to work as a sysadmin, mostly doing operations. Thankfully, I think I am doing well. So here's to the next 20 years ahead. Thank you for reading my story.

The importance of culture

Origin Post on LinkedIn, Published on January 6, 2020

Being in Japan the last couple of weeks, I've noticed that the high efficiency in almost everything they do (crossing roads, cooking, public transportation, etc.) comes from using small queues for every step of the process, reaching maximum throughput with small effort. The culture of small batches/queues reminds me of the core principles of DevOps, as identified in the book "The Goal: A Process of Ongoing Improvement" by Eli Goldratt, and of course in the "Theory of Constraints". Imagine applying this culture to everything you do in your life, from work to your personal life: reducing any unnecessary extra cost, reducing waste by performing Kata. Kata is about form, from dancing to creating your cloud infrastructure with reproducible daily work or routines that focus on the process of reaching your business goals. This truly impresses me in Japanese culture, along with the respect they show each other. You may of course notice young people riding their bicycles in the middle of the street, watching their smartphones instead of the road 😀, but the majority of people bow their heads to show respect to other people and to other people's work or service. We sometimes forget this simple rule in our work. Sometimes the pressure, the deadlines or the plethora of open tickets on our Jira board (or boards) makes us cranky with our colleagues. We forget to show respect for other people's work. We forget that we need each other to reach our business goals as a team. We forget to have fun and joy.
Being productive is not about closing tickets; it is about using your creativity to solve problems, or to provide a new feature or improve an old one that makes your customers happy. It is about the feedback you get from your customers and colleagues; it is about the respect for your work. It is about being happy. For the first time in my life, I took almost 30 days off work, to relax, to detox (not having a laptop with me), to spend some time with family and friends. To be happy. So if any colleague from work is reading this article: