The easiest way to install Docker on Ubuntu 13.10

This is how you can install Docker (lxc-docker) on Ubuntu 13.10 with no pain.

Copy these commands into one file and make that file executable.

1. Save all of the commands below as one file, for example: install_docker.13.10.sh


sudo apt-get -y update
sudo apt-get -y autoremove
# AUFS support for the running kernel
sudo apt-get -y install linux-image-extra-`uname -r`
# key for the Docker apt repository
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
# add a 127.0.0.1 record for this hostname to /etc/hosts
sudo sed -i "/localhost/a 127.0.0.1 $(hostname)" /etc/hosts
sudo sh -c "echo deb http://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
sudo apt-get -y update
sudo apt-get -y install lxc-docker

2. After saving it as install_docker.13.10.sh, let's make it executable


#chmod +x install_docker.13.10.sh

3. Execute it. The script also adds a 127.0.0.1 record for your hostname to /etc/hosts, which prevents an error when docker.list is added to /etc/apt/sources.list.d


#sh install_docker.13.10.sh
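
To check that the installation worked, here is a quick sanity test; this assumes the docker daemon started successfully (the ubuntu image is pulled on first run):


#sudo docker version
#sudo docker run -i -t ubuntu /bin/bash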

Just leave a comment if you find an error in this simple installation script.

Simple route to throw an exception

This is just a note on how to throw an exception in the Laravel framework using routes.php.


#sudo vim routes.php

Add this to that file

Route::get('you_have_been_compromised', function() {
    throw new \Exception("This is where the error goes.");
});

I did this because I needed to simulate an error and make sure errors are no longer written to files on the instances but streamed directly to the log server. The results will vary, but you will save a lot of disk space on each instance, you don't have to manage logrotate, and you can attach Sumo Logic or any other alert agent only on your log server instead of putting an agent on every instance.
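
To trigger the simulated error, just hit the route and confirm nothing new is written on the instance; a minimal sketch, assuming a hypothetical app URL and the Laravel 4 default log directory:


# hypothetical URL; change it to wherever your app is served
curl -i http://your-app.example.com/you_have_been_compromised
# nothing new should appear in the instance's local log directory
ls -lt app/storage/logs/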

Credits to my friend ‘sky akbar ko’

Error: sudo env ARCHFLAGS="-arch x86_64" gem install pg

This error shows up because ruby-dev is not installed on your operating system (see the mkmf LoadError in the log below). To fix it, just install ruby-dev


awan@google.com:~/home/awan$ sudo apt-get install ruby-dev


Building native extensions. This could take a while...
ERROR: Error installing pg:
ERROR: Failed to build gem native extension.

/usr/bin/ruby1.9.1 extconf.rb
/usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require': cannot load such file -- mkmf (LoadError)
from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
from extconf.rb:2:in `<main>'

Gem files will remain installed in /var/lib/gems/1.9.1/gems/pg-0.17.1 for inspection.
Results logged to /var/lib/gems/1.9.1/gems/pg-0.17.1/ext/gem_make.out

And if the error below shows up instead, then execute this


awan@google.com:~/home/awan$ sudo apt-get install libpq-dev


Building native extensions. This could take a while...
ERROR: Error installing pg:
ERROR: Failed to build gem native extension.

/usr/bin/ruby1.9.1 extconf.rb
checking for pg_config... yes
Using config values from /usr/bin/pg_config
You need to install postgresql-server-dev-X.Y for building a server-side extension or libpq-dev for building a client-side application.
You need to install postgresql-server-dev-X.Y for building a server-side extension or libpq-dev for building a client-side application.
checking for libpq-fe.h... no
Can't find the 'libpq-fe.h header
*** extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of
necessary libraries and/or headers. Check the mkmf.log file for more
details. You may need configuration options.

Provided configuration options:
--with-opt-dir
--without-opt-dir
--with-opt-include
--without-opt-include=${opt-dir}/include
--with-opt-lib
--without-opt-lib=${opt-dir}/lib
--with-make-prog
--without-make-prog
--srcdir=.
--curdir
--ruby=/usr/bin/ruby1.9.1
--with-pg
--without-pg
--with-pg-config
--without-pg-config
--with-pg_config
--without-pg_config
--with-pg-dir
--without-pg-dir
--with-pg-include
--without-pg-include=${pg-dir}/include
--with-pg-lib
--without-pg-lib=${pg-dir}/lib

Gem files will remain installed in /var/lib/gems/1.9.1/gems/pg-0.17.1 for inspection.
Results logged to /var/lib/gems/1.9.1/gems/pg-0.17.1/ext/gem_make.out
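
In short, on Ubuntu/Debian you usually need both packages before the gem builds; a quick recap of the fix described above, assuming the same setup as in the logs:


sudo apt-get install -y ruby-dev libpq-dev
sudo env ARCHFLAGS="-arch x86_64" gem install pg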

Install Ansible the easiest way

The easiest way to install Ansible is to copy the script below into a file and then execute that file.

1. Save this script to a file, for example: google.ansible.sh


#----------- start code ------------
#!/bin/bash
sudo apt-get -y install python-pip python-dev
sudo pip install -U boto
sudo pip install -U https://github.com/ansible/ansible/archive/devel.zip
ansible --version
#----------- end code ------------

2. chmod +x that file
~#sudo chmod +x google.ansible.sh

3. Execute the file
~#sudo sh google.ansible.sh

CentOS 7 users can use the script below.

1. Save the code below as a bash script: google.ansible.sh


#----------- start code ------------
#!/bin/bash
sudo yum -y install epel-release
sudo yum install -y gcc python-pip python-devel
sudo pip install -U boto
sudo pip install -U https://github.com/ansible/ansible/archive/devel.zip
ansible --version
#----------- end code ------------

2. Change the file permissions so it can be executed

~#chmod +x ./google.ansible.sh

3. Execute the installation file

~#sh ./google.ansible.sh
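
Once it finishes, here is a quick way to check that Ansible can actually reach a machine; a minimal sketch, assuming a host you can already SSH into (the hostname below is just a placeholder):


echo "my.server.example.com" > hosts
ansible all -i hosts -m ping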

Create your own .pem for ssh login using certificate

Some of my colleagues ask me this: how can I log in to my server using a key file, the same way I log in to AWS (Amazon Web Services)?

1. Set up your server


awan@google.com# ssh-keygen -t rsa -b 2048
awan@google.com# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

2. Copy your private key to a file; let's name it 'awan.access.pem' (you can give it any name you want)


awan@local_machine.dev.google.com# scp awan@google.com:~/.ssh/id_rsa .
awan@local_machine.dev.google.com# cp id_rsa awan.access.pem
awan@local_machine.dev.google.com# chmod 0600 awan.access.pem

Or just cat the id_rsa file, copy its contents into a new file, and rename that file.

3. Let's access our box from any machine using the .pem key file. Please don't share the key with unauthorized personnel


awan@local_machine.dev.google.com# ssh -i awan.access.pem awan@google.com

awan : change this to your username
google.com : this is also an example; change it to your server's IP address, Linux box, or domain name

After step 2 you can create an image or snapshot of the virtual machine or container, so the next time you log in you don't have to regenerate the key.
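
If you don't want to pass -i every time, you can also add an entry to your local ~/.ssh/config; a minimal sketch, where the 'mybox' alias is just an example name:


cat >> ~/.ssh/config <<'EOF'
Host mybox
    HostName google.com
    User awan
    IdentityFile ~/awan.access.pem
EOF

After that, ssh mybox is enough to log in.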

Fixing the Ubuntu 14.04 error when compiling barnyard: possibly undefined macro: AC_PROG_LIBTOOL

This is the error:

addhe@google:~/barnyard$ autoreconf -fvi
autoreconf: Entering directory `.'
autoreconf: configure.in: not using Gettext
autoreconf: running: aclocal --force -I m4
aclocal: warning: autoconf input should be named 'configure.ac', not 'configure.in'
autoreconf: configure.in: tracing
autoreconf: configure.in: not using Libtool
autoreconf: running: /usr/bin/autoconf --force
configure.in:27: error: possibly undefined macro: AC_PROG_LIBTOOL
If this token and others are legitimate, please use m4_pattern_allow.
See the Autoconf documentation.
autoreconf: /usr/bin/autoconf failed with exit status: 1

How to fix this? Simply use this command


#sudo apt-get install libtool

If you're installing an IDS such as Snort like me, just make sure the packages below are already installed; if they are, you will not face the error above


#apt-get -y install libwww-perl libnet1 libnet1-dev libpcre3 libpcre3-dev autoconf libcrypt-ssleay-perl libtool libssl-dev build-essential automake gcc make flex bison
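
After installing libtool, you can re-run autoreconf to confirm the AC_PROG_LIBTOOL error is gone; a small sketch, assuming the barnyard sources live in ~/barnyard as in the log above:


cd ~/barnyard
autoreconf -fvi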

Terminate all queries on PostgreSQL with one command line

Please don't use this if you do not know exactly what is going to happen.

format :
psql -U{USERNAME} -h{HOSTNAME} {DATABASE_NAME} -c "select pid from pg_stat_activity" -t | xargs -n1 -I {} psql -c "SELECT pg_terminate_backend({})"

Change the placeholders to whatever suits your configuration; I put a sample below

sample :
psql -Ugoogleadmin -hgoogle_com google_production_db -c "select pid from pg_stat_activity" -t | xargs -n1 -I {} psql -c "SELECT pg_terminate_backend({})"
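
A variant that does everything in one query and skips the backend running it; this assumes PostgreSQL 9.2 or newer, where the column is named pid (older versions use procpid):

psql -Ugoogleadmin -hgoogle_com google_production_db -t -c "SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE pid <> pg_backend_pid();"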

Using sdiff properly

Many people use sdiff the wrong way; most people just want the differences. If you only have 100 or 200 lines that's fine, but what happens when you compare 100,000 lines against 100,000 lines?

#FILE_TES1=yourfile.log
#FILE_TES2=yourfile2.log

#sdiff -bBWs $FILE_TES1 $FILE_TES2

This will output only the differences: -b ignores changes in the amount of whitespace, -B ignores blank lines, -W ignores all whitespace, and -s suppresses lines common to both files.

Simulate CPU load for your server

This script detects how many CPU processors your server has and simulates a "fake" load on each of them. Don't do this on a production server, as it will max out your entire system. 'cpu' is just a loop variable.


# count the logical CPUs on this box
cpu_total=`grep -c processor /proc/cpuinfo`
# spawn one busy loop in the background per CPU
for cpu in $(seq 1 $cpu_total)
do
    ( while true; do true; done ) &
done
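
To stop the simulated load afterwards, a minimal sketch, assuming you run it from the same shell session that started the loops:

# list this shell's background job PIDs and kill them
jobs -p | xargs -r kill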

How to filter Kibana for the last 24 hours

This is what you will face when using Kibana: you want your panel to show only the last 24 hours of data.

Once you have installed Kibana and your data is live, here are the steps to make your panel show the last 24 hours from now.

1. Open your Kibana URL, something like

http://kibana.awan.google.com/#/dashboard

2. Put a sample filter in your Kibana dashboard

3. Save your dashboard

4. Export your dashboard as a JSON file; it will be downloaded to your PC

5. Edit the JSON file (the one you just downloaded). If you added the sample filter in step 2, there should be something like this:


"filter": {
"idQueue": [
1,
2
],
"list": {
"0": {
"type": "time",
"from": "now-24h",
"to": "now",
"field": "@timestamp",
"mandate": "must",
"active": true,
"alias": "",
"id": 0
}
},
"ids": [
0
]
}
},

6. Make sure your filter has this, then save it as "something dashboard"

7. Click Load and pick your "something dashboard"

Your filter panel should now show the last 24 hours from now in real-time mode. Congratulations!
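
If you want to double-check the same time window outside Kibana, you can send the equivalent range filter straight to Elasticsearch; a rough sketch for the 1.x-era API, where the host and the logstash-* index pattern are assumptions:

curl -XGET 'http://localhost:9200/logstash-*/_search?pretty' -d '{
  "query": {
    "filtered": {
      "filter": {
        "range": {
          "@timestamp": { "gte": "now-24h", "lte": "now" }
        }
      }
    }
  }
}'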