Dynamic Excel Files via Python

On a website, I saw a request from a user, saying:

I am creating xlsx files for my clients based on dynamic information. When generating the xlsx files, I use formulas, cell coloring, row border coloring, etc. for 400 records, which delays file creation. Currently it takes around 40 minutes, but I need a solution that would let me generate them in minutes.

Most programming languages and scripting shells have support for Excel, but not all of them are simple to use, some lack features, and only a portion of Excel’s features can be simulated. So I made a quick search and noticed that Python has a great module for this kind of job: XlsxWriter.

By looking at the features it supports, I liked it and made a sample based on its examples: creating 400 Excel files, where each file contains its own file number, all files share the same data set, and each has a column total multiplied by its file number.
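As a rough sketch of the idea (the file names, the sample data set, and the formula layout here are my own assumptions; the real code is in the repo linked at the end), the program might look like this:

[code language="python"]
# a minimal sketch, assuming XlsxWriter is installed (pip install xlsxwriter);
# file names, sample data and formula layout are illustrative assumptions
import xlsxwriter

data = [10, 20, 30, 40]  # hypothetical shared data set

for file_no in range(1, 401):
    workbook = xlsxwriter.Workbook('report_%03d.xlsx' % file_no)
    worksheet = workbook.add_worksheet()

    # each file carries its own number
    worksheet.write(0, 0, 'File number')
    worksheet.write(0, 1, file_no)

    # the shared data set goes into column A, starting at Excel row 3
    for row, value in enumerate(data, start=2):
        worksheet.write(row, 0, value)

    # column total multiplied by the file number, written as a real formula
    worksheet.write_formula(len(data) + 2, 0,
                            '=SUM(A3:A%d)*B1' % (len(data) + 2))

    workbook.close()
[/code]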

This is the command I used to run the program. It is nice that creating and performing basic operations on all the Excel files took only 2.65 s.
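It was an invocation along these lines (the script name here is my assumption):

[code language="bash"]
# time the batch creation of all 400 files
time python create_xlsx_files.py
[/code]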

When I checked the folder, I saw that the 400 Excel files had really been created.

And here is the content of the 400th file. There is also a formula inside! 🙂

If you want to see the code, you may find it in my GitHub repo.

Awk Fun!

Awk is a powerful toy that Linux users have. As Wikipedia explains:

The AWK language is a data-driven scripting language consisting of a set of actions to be taken against streams of textual data – either run directly on files or used as part of a pipeline – for purposes of extracting or transforming text, such as producing formatted reports.

Lately, I have seen some entries on LinkedIn where the sentences are written in binary form, like this:
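For example, something like the following (an illustrative sample of my own, which decodes to “Hi”):

[code gutter="false"]
01001000 01101001
[/code]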

As a fun exercise, I used awk to easily transform them into human-readable sentences. You may find the file here on GitHub, and the content is:
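(What follows is my own reconstruction of the idea, assuming space-separated 8-bit octets; the exact file is in the repo.)

[code gutter="false"]
# resolve.awk: turn space-separated binary octets into characters
{
    for (i = 1; i <= NF; i++) {
        n = 0
        # build the decimal value of the binary string in field $i
        for (j = 1; j <= length($i); j++)
            n = n * 2 + substr($i, j, 1)
        # print the character with that code
        printf "%c", n
    }
    print ""
}
[/code]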

In some cases, there was no space between the octets. Therefore, an extra step had to be done: split the input into chunks of eight characters. The new code is something like this, more or less (GitHub):
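(Again a reconstruction of mine, not the exact file:)

[code gutter="false"]
# put_space.awk: insert a space after every 8 characters
{
    for (i = 1; i <= length($0); i += 8)
        printf "%s ", substr($0, i, 8)
    print ""
}
[/code]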

As a result: save the source text to a file called source.txt (you may also use echo, cat, or something else in a pipeline). Save the first code as resolve.awk and run it:
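[code language="bash"]
awk -f resolve.awk source.txt
[/code]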

If there are no spaces in the source, then save the second code as put_space.awk and run both in a pipeline:
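[code language="bash"]
awk -f put_space.awk source.txt | awk -f resolve.awk
[/code]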

Prometheus & Grafana & Netdata

A while ago, I wrote a post about monitoring a single Linux system. It was the magic of netdata that was supplying those good visuals. You can look here.

Netdata is good for basic info about a single system, but what about an enterprise infrastructure, usually with lots of machines? Let’s broaden the concept some more: it is not only basic data we want to see, but also metrics of enterprise applications, such as the cache retrieve miss count of Apache web servers, table I/O waits of MySQL servers, or the DIMM status of physical servers. A central solution could be useful for all of these. This is where Prometheus comes into play.

What is Prometheus? As described here:

Prometheus is an open-source systems monitoring and alerting toolkit.

And it is a monitoring system and a time series database. Have a look at GitHub!
Some applications supply metrics to Prometheus directly, and some application metrics are collected by exporters.

When you first install Prometheus, you get a screen similar to this.

If you type something about a metric into the expression area or open the dropdown list, you see the metrics being collected. You may view a metric both as tabular data and as a graph. Prometheus provides a functional expression language (PromQL) that lets the user select and aggregate time series data in real time.
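For instance, a simple PromQL query (a generic illustration of mine, not tied to a particular exporter) could aggregate the built-in up metric per scrape job:

[code gutter="false"]
# average the "up" status of all targets, grouped by job
avg by (job) (up)
[/code]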

I have been monitoring my computer with netdata, and I noticed (see “Netdata with Prometheus”) that netdata is able to supply Prometheus metrics without exporters. The only thing I had to do was add these lines to the Prometheus configuration:
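A sketch of such a scrape job, following netdata’s documented Prometheus endpoint (port 19999 and /api/v1/allmetrics are netdata defaults; the target host is my assumption):

[code gutter="false"]
scrape_configs:
  - job_name: 'netdata'
    metrics_path: '/api/v1/allmetrics'
    params:
      format: ['prometheus']
    static_configs:
      - targets: ['localhost:19999']
[/code]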

After seeing my netdata metrics on the Prometheus page, I used the sample query and got the graph.

This gives me what I need as a sample, but there is no need to restrict ourselves when a great visualization tool exists: Grafana.

Grafana is an open source metric analytics & visualization suite. It is most commonly used for visualizing time series data for infrastructure and application analytics.

I added Prometheus as a data source. Then, in a new dashboard, I added system_cpu_user as the query of a panel and was able to get this graph:

Grafana offers us a variety of graphs. I added the process counts of some users as graphs and gauges.

Grafana officially supports the following data sources:

Graphite
Elasticsearch
CloudWatch
InfluxDB
OpenTSDB
KairosDB
Prometheus

What can be done is rather flexible. If you want to monitor anything in the enterprise, as long as it supplies metrics to Prometheus or has an exporter, the rest depends on your imagination. (Of course, custom exporters can also be written. :))

In this post, I used netdata metrics in Prometheus and then used them in my Grafana visual objects. For this working environment, I did not install Prometheus and Grafana on my computer. Instead, I used the Docker images prom/prometheus and grafana/grafana.
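A minimal way to run both containers (the host path of the Prometheus config file is my assumption; 9090 and 3000 are the images’ default ports):

[code language="bash"]
# run Prometheus with a custom config mounted from the host
docker run -d -p 9090:9090 \
    -v /path/to/prometheus.yml:/etc/prometheus/prometheus.yml \
    prom/prometheus

# run Grafana on its default port
docker run -d -p 3000:3000 grafana/grafana
[/code]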

Good day…


My Ansible Journey

For a long time, I wanted to tick one of the most wanted jobs off my to-do list, but could not. At last, I did: “learn ansible”. One of my colleagues had used it, and it seemed so interesting and exciting to me. However, this is work life; time is not easy to find most of the time.
Anyway, here we are, and happy we are! Let’s begin…

Ansible is an open-source automation engine that automates software provisioning, configuration management, and application deployment.

That is how the wiki defines Ansible. It uses modules and direct command lines, and after a first-time preparation, you can begin to ensure that all systems have identical configurations, folder structures, files, installed apps, etc. But one of the most important features of Ansible, for me, is that it is idempotent. Idempotency is defined in http://docs.ansible.com/ansible/glossary.html like this:

An operation is idempotent if the result of performing it once is exactly the same as the result of performing it repeatedly without any intervening actions.

That means it does not matter how many times you run your playbooks or commands. For instance, if you are installing a package, you run the playbook and it gets installed. The next time you run it, nothing is done, because the package is already installed. It does not try to reinstall. Nice, huh!
Let’s come back to my journey.

  • First of all, I had to learn, so I had to read the documents. I downloaded the http://docs.ansible.com/ansible/index.html part of the site to browse locally. That is an unnecessary detail, I know.

  • Then, I noticed that I had to install it somewhere. I did not want to install it locally, but I had no test server. Docker came to save me! I prepared two base images, one for the server and one for a client. Details and files are here: https://github.com/sistemcim/docker/tree/ansible

  • After having a test server and clients, I ran several ad-hoc commands with the “command” and “shell” modules (see the sketch after this list). Then, I wanted to write my first playbook. But

    the Ansible playbook format is rather strict, so it is usual to fail on your first try.

    Anyway, the error messages help most of the time, and you get used to it after a while.
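A couple of ad-hoc examples of the kind I ran (the host pattern and the commands themselves are illustrative):

[code language="bash"]
# run a single command on every host in the inventory
ansible all -m command -a "uptime"

# the shell module goes through a shell, so pipes and redirects work
ansible all -m shell -a "grep PRETTY /etc/os-release | head -1"
[/code]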

As you read the documents, work on playbooks, and begin to understand what you can do, you will get more and more excited. To give you an idea, here is an example from one of my first drafts.

With my web role, I create the users listed in vars/main.yml, change their passwords, modify the pam.d/sshd file to allow them to ssh, install the sudo package, and give the users sudo permissions.
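A hypothetical excerpt of the user-creation task (the variable names and the users list structure are my assumptions; the real role is in the repo linked below):

[code gutter="false"]
# create each user listed in vars/main.yml with a hashed password
- name: create web users
  user:
    name: "{{ item.name }}"
    password: "{{ item.password | password_hash('sha512') }}"
    shell: /bin/bash
  with_items: "{{ users }}"
[/code]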

It is so easy. For instance, say you want to install the rsyslog package on all clients and make sure it starts after installation and also at system boot. Here is the code:
[code gutter="false"]
- hosts: all
  tasks:
    - name: install rsyslog package
      apt:
        name: rsyslog
        state: present

    - name: start rsyslog
      service:
        name: rsyslog
        state: started
        enabled: yes
[/code]

That’s all, folks, for now. Have a look at my Ansible works:
https://github.com/sistemcim/ansible

Enjoy yourself…

List Windows Logs

PowerShell is really nice and useful. Although it has a learning curve, as most other languages do, for Windows admins it can make life better and easier.

In the sample below, all Windows log names are printed to the screen. By the way, if you are automating Windows log searches or lookups, check out “wevtutil.exe” (https://technet.microsoft.com/en-us/library/cc732848(v=ws.11).aspx).

[code language="powershell"]wevtutil.exe el | %{ Write-Host $_ }[/code]
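As a hypothetical next step, the same listing can be filtered, for example to find PowerShell-related logs:

[code language="powershell"]
# list only the logs whose name mentions PowerShell
wevtutil.exe el | Where-Object { $_ -like "*PowerShell*" }
[/code]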

Modify Windows Registry via Reg File

Every Windows system admin may need to work with the registry for some purpose. At the very least, all group policies and most installed programs make modifications in it.
To make changes in the registry, the “regedit” command is rather helpful. It also opens a GUI if run without parameters. However, for bulk or repeated modifications, using a file can be better. Here is how to make use of it:

[code language="bash"]regedit /S file.reg[/code]

file.reg
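As an illustration of the format, a .reg file follows this standard layout (the key and values below are hypothetical):

[code gutter="false"]
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\ExampleApp]
"ExampleString"="some data"
"ExampleDword"=dword:00000001
[/code]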

For detailed info, please check this link:
https://support.microsoft.com/en-us/help/310516/how-to-add,-modify,-or-delete-registry-subkeys-and-values-by-using-a-.reg-file

Monitor Single Linux System with Netdata

If the case is monitoring a single Linux system, netdata comes into play. It supplies data about lots of crucial system metrics and is also very lightweight. What is more, even the web server it uses was written for best performance. Everyone monitoring a system should have a look at https://netdata.firehol.org. Thanks to Costa Tsaousis and the contributors.

To install, get the files from https://github.com/firehol/netdata/
If GitHub is not accessible -> https://firehol.org/download/netdata/latest/
At the time I was working on this, the latest version was 1.4.

[code language="bash"]
wget https://firehol.org/download/netdata/latest/netdata-1.4.0.tar.gz
tar xvzf netdata-1.4.0.tar.gz
cd netdata-1.4.0/
./netdata-installer.sh
[/code]

Watching the installer output, I noticed that some prerequisites were missing, so I ran the following commands:

[code language="bash"]
dnf -y install libuuid-devel zlib-devel autoconf gcc make automake
./netdata-installer.sh
[/code]

Enjoy real-time performance and health monitoring…

DHCP Backup & Restore

Most organizations use DHCP (Dynamic Host Configuration Protocol) to provide IP addresses to their clients. If the organization’s schema and network segments are not very dynamic, then the DHCP server does not need to be modified very often and gets forgotten somewhere out there.

However, there can be times when the DHCP configuration must be restored because the server, the service, or something else failed. Furthermore, for backward checks done for various reasons, especially security, old DHCP reservations might be needed. At those times, DHCP backups should exist and be available.

Thankfully, the netsh command helps us back up and restore the DHCP configuration. Here it is:

To Back Up DHCP
[code language="bash"]
netsh dhcp server \\servername dump > DHCP-Backup-Server.txt
[/code]

To Restore DHCP
[code language="bash"]
netsh exec DHCP-Backup-Server.txt
[/code]

Hope you do not need these, but being ready for different scenarios makes one feel more comfortable.

Good days…

Windows Remote Computer Log

While managing or configuring Windows servers from a management server, you may need the logs of the servers being managed. For that purpose, the following PowerShell command might be used. In this example, the logs of “RemoteComputerName” are displayed, filtered by “Logname” and “EventID”.

[code language="powershell"]
# query the remote computer's Group Policy operational log
Get-WinEvent -ComputerName <RemoteComputerName> `
    -LogName Microsoft-Windows-GroupPolicy/Operational |
    Where-Object { $_.Id -eq 9999 } |
    ForEach-Object { $_.Message }
[/code]

DFSR Database Rebuild

While using DFS Replication, you may encounter problems, or a file may not be replicated because it is in use. In such situations, we had to replicate files manually several times: we first killed all handles to the file and then overwrote it with the newer version. But this action has a side effect, so keep reading.

DFSR holds metadata in a database for every file and folder inside a replicated folder, and replication decisions are made according to this data. If a file is touched outside of DFSR, this metadata might become broken or out of date, and the file or folder is no longer replicated via DFSR. Removing the file or folder from the source folder, cleaning it from the target, and re-adding it to the source folder will usually recreate the items on the destination.

But if you have multiple destinations and only one is broken, the method above might be an unnecessary burden on the whole system and network. What is more, it did not work for us in some extreme situations. In those cases, we had to rebuild the DFSR database to solve the problem. Thank God, it worked!

The DFSR database is under the “System Volume Information” folder of the partition where the DFSR target resides. To get into it and work on it, it is necessary to be the “system” user; that is why we use psexec, to become the system user.

So much talk again. Here it is:

[code language="bash"]
rem stop the DFSR service before touching its database
net stop dfsr

rem psexec -s opens a cmd prompt as the SYSTEM user
psexec -s cmd
cd "System Volume Information"
cd DFSR
rem rename the database folder so DFSR rebuilds it on the next start
move database_XXX database_XXX_old
exit

rem starting the service triggers the database rebuild
net start dfsr
[/code]
Hope you do not need this any time!
Good days…