2014-11-24

Netcat

(Up-to-date source of this post.)

TCP/IP Swiss army knife: a simple (yet powerful!) Unix utility that reads and writes data across network connections, using TCP or UDP.

Netcat as a Client

Connect to some port of some host:

nc <host> <port>
  • your STDIN is sent to the host
  • anything that comes back across network is sent to your STDOUT
  • this continues indefinitely, until the network side closes (not until EOF on STDIN like many other apps)

Test remote HTTP server:

nc google.com 80
GET / HTTP/1.0

(press Enter two times after the GET line)

Check whether a UDP port is open:

nc -vu ns.nameserver.tld 53

Scan a range of ports without sending any data to them (-z, zero-I/O mode):

nc -v -z host.tld 21-25

Change the source port / address (e.g. to evade a firewall):

nc -p 16000 host.tld 22
nc -s 1.2.3.4 host.tld 8181

Netcat as a Server

Listen for an incoming connection on some port:

nc -l <port>

Send a directory over the network:

.. host A (receiving data)

nc -l 1234 | tar xvf -

.. host B (sending data)

tar cf - </some/dir> | nc -w 3 <hostA> 1234

Send a whole partition over the network:

.. host A (receiving data)

nc -l 1234 | dd of=backup_sda1

.. host B (sending data)

dd if=/dev/sda1 | nc -w 3 <hostA> 1234

Run a command (potentially dangerous!); ex. open a shell access:

.. host A (server)

nc -l 9999 -e /bin/bash

.. host B (client)

nc hostA 9999


2014-10-31

Shell Completion

(Up-to-date source of this post. First version created 2013-03-12)

Bash (one of the most popular shells) offers a great feature that makes many people's Tab key pretty worn. It completes the names of commands, directories and files you start to type. The complete builtin (man bash => "Programmable Completion") lets users extend the standard completion function.

Bash Completion

The Bash Completion project offers many completion rules that you can add to ~/.bashrc. If not already installed:

aptitude install bash-completion

Then you may need to source it from /etc/bashrc or ~/.bashrc:

# Use bash-completion, if available
[[ $PS1 && -f /usr/share/bash-completion/bash_completion ]] && \
    . /usr/share/bash-completion/bash_completion

Debian does this for you via /etc/bash.bashrc.

Try it out by typing:

ssh [TAB]

If you don't get any meaningful results, add some of your hosts to ~/.ssh/config and try again:

host login.example.org
host bigserver.example.net

If you have SSH keys deployed on the remote hosts, try out:

scp bigserver.example.net:[TAB]
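A minimal custom rule can be registered with the complete builtin; the command name mytool and its options below are made up for illustration. compgen is the primitive that completion functions use to filter candidate words against the current prefix:

```shell
# register a static word list for a hypothetical command "mytool"
complete -W "start stop status reload" mytool

# compgen shows what the completion machinery would offer for the prefix "st"
compgen -W "start stop status reload" -- st
```

After this, typing mytool st[TAB][TAB] offers start, stop and status.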

Bash Completion with Perl

The Bash Completion project builds on shell scripting, which is easy to write but limited. You might prefer a more complete language, like Perl.

Perl Programs

If you want your Perl applications to complete their options, use the Getopt::Complete module:

#!/usr/bin/perl -w
# getopt-complete -- sample self-completing script
use strict;

use Getopt::Complete(
    'tmpdir' => [ "/tmp",     "$ENV{HOME}/temp", "/var/tmp" ],
    'user'   => [ $ENV{USER}, "root" ],
);

Add the following to ~/.bashrc:

function _getopt_complete () {
    COMPREPLY=($( COMP_CWORD=$COMP_CWORD perl `which ${COMP_WORDS[0]}` ${COMP_WORDS[@]:0} ));
}
complete -F _getopt_complete getopt-complete

Then source ~/.bashrc or relogin and run:

./getopt-complete <TAB>

Compiled Programs

In case you want to add the completion functionality to a compiled program you can't rewrite, you have to wrap it in an external helper, like Mike Schilli did in github-helper. If you want to use this script, put it into your PATH and add the following to ~/.bashrc:

complete -C github-helper -o default git

The -o default option falls back to the shell's standard completion mechanism if github-helper has nothing to offer.

2014-09-09

Unix Times and Perl

(Up-to-date source of this post.)

A Unix filesystem consists of two parts:

  • data blocks - contents of files and directories (special files with inode-name pairs)
  • index to those data blocks

Entries in the index are called inodes (index nodes). Inodes contain metadata (data about data) on the files, like:

  • pointer to the data blocks
  • type of thing it represents (directory, file, etc.)
  • size of the thing
  • "mode" of the thing (nine permissions bits + three bits that primarily affect the operation of executables)
  • info on owner and group

There is also time information among the metadata; three types of it, actually:

.----------------------------------------------------------------------------------------------------.
| Type        | Short name | ls option | Description                                                 |
+-------------+------------+-----------+-------------------------------------------------------------+
| Access Time | atime      | -ult      | when file was last accessed (read)                          |
| Modify Time | mtime      | -lt       | when the actual contents of the file were last modified     |
| Change Time | ctime      | -cl       | when the inode information (the metadata) was last modified |
'-------------+------------+-----------+-------------------------------------------------------------'
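With GNU stat all three timestamps can be printed at once (the file name is arbitrary); %X, %Y and %Z are atime, mtime and ctime in seconds since the Epoch:

```shell
touch /tmp/stamp-demo                                  # fresh file - all three stamps are "now"
stat -c 'atime=%X mtime=%Y ctime=%Z' /tmp/stamp-demo
chmod u+x /tmp/stamp-demo                              # metadata change - only ctime moves
stat -c 'atime=%X mtime=%Y ctime=%Z' /tmp/stamp-demo
```

Note that whether reading a file actually updates atime depends on the mount options (e.g. relatime).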

Using timestamps

To get information about a file (actually about its inode) run the shell command stat or find:

  • find /home/webservice/backups/ -mtime +5 -exec rm -f {} \; -- deletes any file whose contents were last modified more than 5 days ago.
  • find /home/webservice/backups/ -ctime +5 -exec rm -f {} \; -- deletes any file whose inode information (metadata) was last changed more than 5 days ago.
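The -mtime test is easy to try out in a scratch directory; GNU touch -d backdates a file's mtime (the paths here are made up):

```shell
mkdir -p /tmp/mtime-demo
touch -d '10 days ago' /tmp/mtime-demo/old.log   # backdated mtime
touch /tmp/mtime-demo/new.log                    # mtime = now
find /tmp/mtime-demo -mtime +5                   # lists only old.log
```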

Getting time info with Perl

To access an inode from within Perl, use:

1. stat function (returns pretty much everything that the underlying stat Unix system call returns):

my($atime, $mtime, $ctime) = (stat($filename))[8,9,10] or die "Couldn't stat '$filename': $!";

.. $atime, $mtime, and $ctime -- the three timestamps in the system's timestamp format: a number (traditionally 32-bit) telling how many seconds have passed since the "Epoch", an arbitrary starting point for measuring system time (the beginning of 1970 at midnight Universal Time on Unix systems).

2. File::stat:

use File::stat;

my $inode = stat("/bin/ls");
my $ctime = $inode->ctime;
my $size  = $inode->size;

3. -X operators, modeled on the shell's test operators:

my @original_files = qw/ file1 file2 file3 /;  # in practice - read from the FS using a glob or directory handle
my @big_old_files;                             # files we want to put on backup tapes
foreach my $filename (@original_files) {
    push @big_old_files, $filename             
      if -s $filename > 100_000 and -A _ > 90; # -X operators cache value returned by stat(2); access it via _
}

Changing timestamps with Perl

In those rare cases when you want to lie to other programs about when a file was most recently accessed (atime) or modified (mtime), use the utime function:

my $atime = time;                 # now
my $mtime = $atime - 24 * 60 * 60; # one day (86400 secs) ago
utime $atime, $mtime, glob "*";   # set access to now, mod to a day ago

.. the third timestamp (ctime) is always set to "now" whenever anything alters a file - there's no way to set it with utime

.. the primary purpose of ctime is for incremental backups - if the file's ctime is newer than the date on the backup tape, it's time to back it up again
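The effect of utime (and the ctime side effect noted above) can be checked from the shell; /tmp/utime-demo is just an example name:

```shell
touch /tmp/utime-demo
perl -e 'utime time, time - 86400, "/tmp/utime-demo" or die "utime: $!"'
stat -c %Y /tmp/utime-demo   # mtime: about 86400 s behind date +%s
stat -c %Z /tmp/utime-demo   # ctime: "now" - the utime call itself touched the inode
```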

Sources

  • Perl Cookbook
  • ULSAH

2014-07-04

Benchmarking Perl Code

(Up-to-date source of this post.)

Sometimes my code takes a really long time to run and I'd like to know which of the alternatives runs faster.

In this example I compare two sorting subroutines: a "naive" approach and "The Schwartzian Transform". The former compares all the files' sizes to each other, while the latter first precomputes the size of each file and then does the comparisons.

use Benchmark qw(timethese);

chdir;    # change to my home directory
my @files = glob '*';

timethese(
    -2,
    {
        naive => sub {
            my @sorted = sort { -s $a <=> -s $b } @files;
        },
        schwartzian => sub {
            my @sorted =
              map  { $_->[0] }
              sort { $a->[1] <=> $b->[1] }
              map  { [ $_, -s $_ ] } @files;
        },
    }
);

The program's output:

Benchmark: running naive, schwartzian for at least 2 CPU seconds...
     naive:  2 wallclock secs ( 0.58 usr +  1.49 sys =  2.07 CPU) @ 11661.84/s (n=24140)
schwartzian:  2 wallclock secs ( 1.57 usr +  0.59 sys =  2.16 CPU) @ 21200.00/s (n=45792)

The output says that the Schwartzian Transform is much faster (the subroutine ran almost twice as many times in 2 CPU seconds). The reason is that we don't ask for the file size each time we compare two files; we ask just once per file.

See Also

  • http://perldoc.perl.org/Benchmark.html
  • Intermediate Perl, 2nd, p. 144
  • http://www.perlmonks.com/?node_id=393128

2014-07-03

SSH Tunnel

(Up-to-date source of this post.)

Forwarding remote port (firewall tunneling via SSH)

We want to allow a technician to access the incomp (intranet) host from the outcomp.sk (Internet) host:

1) Redirect the port 2222 on outcomp.sk to port 22 on incomp:

incomp:~$ ssh -R 2222:localhost:22 user@outcomp.sk
outcomp.sk:~$ while [ 1 ]; do date; sleep 300; done  # to keep the connection open

2) Connect to intranet host:

outcomp.sk:~$ ssh -p 2222 root@localhost

We want to connect to a router's web interface (to make some configuration changes) which is not accessible from the Internet. However, we can connect to a Linux server behind the router.

1) /etc/ssh/sshd_config of host.in.internet.com has to contain:

GatewayPorts yes

2) LAN (intranet) host:

ssh -R "*:3333:192.168.1.1:443" host.in.internet.com

3) Web browser somewhere in Internet:

https://host.in.internet.com:3333

Forwarding local port

We want to connect to a remote database running on dbserver, but it is configured to allow connections only from localhost (127.0.0.1). We use port 3307 on the client because the default port 3306 is already taken (e.g. you are running a MySQL server on the client). Note the -h 127.0.0.1: it forces a TCP connection, because with plain localhost the MySQL client would use the local Unix socket and bypass the tunnel.

client:~$ ssh -L 3307:localhost:3306 root@dbserver
client:~$ mysql -h 127.0.0.1 -P 3307 -u root -p dbname


2014-05-25

traceroute Explained

(Up-to-date source of this post.)

traceroute shows the route the packets have to take to get to a destination host. For example:

$ traceroute sdf.lonestar.org
traceroute to sdf.lonestar.org (192.94.73.15), 30 hops max, 60 byte packets
 1  192.168.1.1 (192.168.1.1)  5.475 ms  6.020 ms  6.647 ms
 2  st-static-srk231.87-197-192.telecom.sk (87.197.192.231)  8.832 ms  15.973 ms  15.933 ms
< ... >
20  ge8-7.distb1.sea2.hopone.net (209.160.60.194)  186.286 ms  186.246 ms  175.897 ms
21  SDF.ORG (192.94.73.15)  174.879 ms  174.283 ms  174.816 ms

But what does the output mean exactly and how does traceroute work?

It displays the sequence of gateways (showing the name and the IP address) through which an IP packet travels to reach its destination. The three numbers are the round-trip times of the three probe packets sent to each gateway. You can sometimes see the following instead of a number of milliseconds:

  • * -- no response (error packet) received [congestion or ICMP packet was dropped because it has a low priority]
  • * * * -- no "time exceeded" messages received at all [gateway is down, firewall discards the packets or packets are slow to return]
  • !N, !H, !P -- "network unreachable", "host unreachable", "protocol unreachable" - in any of these cases usually this is the last gateway you can get to [routing problem or a broken network link]

traceroute works by sending three packets to each gateway on the route. These packets have an artificially low TTL field (nominally "time to live", in practice a hop count) set: the first three packets have a TTL of 1, the next three a TTL of 2, and so on. Each gateway that forwards a packet decreases its TTL; when the TTL reaches 0, the gateway discards the packet and sends back an ICMP "time exceeded" message. The originating host extracts the gateway's IP address from the header of the error packet and resolves it to a name using DNS. This process repeats until the destination is reached or the hop limit (30 by default) is exceeded.

2014-04-26

LDAP

(Up-to-date source of this post.)

Concepts and terms

  • protocol for querying and modifying a X.500-based directory service running over TCP/IP
  • current version - LDAPv3 (defined in RFC4510)
  • Debian uses the OpenLDAP implementation (slapd package - recent versions are compiled with GnuTLS instead of OpenSSL due to licensing concerns)
  • can be used for network authentication (login), similarly to Kerberos, Windows NT domains, NIS, AD ("LDAP + Kerberos")
    • replacement for useradd, usermod, passwd, /etc/passwd, /etc/shadow
  • good for bigger networks

LDAP directory

  • hierarchical DB, more often read than written
  • tree of entries or directory information tree (DIT)
  • LDAP directory root = base

LDAP entry

  • consists of set of attributes
  • an attribute has a type (a name/description) and one or more values
  • every attribute has to be defined in at least one objectClass (a special kind of attribute)
  • attributes and objectClasses are defined in schemas
  • each entry has a unique identifier: distinguished name (DN) = relative distinguished name (RDN) + parent entry's DN
    • DN: "cn=John Doe,dc=example,dc=com"
    • RDN: "cn=John Doe"
    • parent DN: "dc=example,dc=com"
  • DN is not an attribute, i.e. not part of the entry itself

Preparing system to use LDAP (Debian 6.0.7)

Set FQDN if not already set (it is used by slapd for initial configuration):

  • /etc/hostname (this file should contain only the hostname, not the FQDN):

    ldap
    
  • /etc/hosts:

    127.0.0.1       ldap.example.com ldap localhost
    # ... IPv6 stuff skipped ...
    1.2.3.4         ldap.example.com ldap
    
  • Restart networking:

    invoke-rc.d hostname.sh start
    invoke-rc.d networking force-reload
    
  • check hostname and FQDN

    $ hostname
    $ hostname -f
    

Install packages:

aptitude install slapd ldap-utils

Configure ldap-utils (client programs):

cp -p /etc/ldap/ldap.conf{,.orig}

cat << EOF > /etc/ldap/ldap.conf
# LDAP base - usually domain name
BASE        dc=example,dc=com
# ldap://, ldaps://
URI         ldaps://ldap.example.com
# certificate file (encryption)
TLS_CACERT  /etc/ldap/ssl/certs/slapd-cert.crt
EOF

LDAP + TLS (Debian 6.0.7)

Configure TLS:

  • Create private key for certificate authority (CA):

    certtool --generate-privkey --outfile /etc/ssl/private/ca.<example.com>.key
    
  • Create the template file /etc/ssl/ca.info to define the CA:

    cn = <Example Company>
    ca
    cert_signing_key
    
  • Create self-signed CA certificate:

    certtool --generate-self-signed \
    --load-privkey /etc/ssl/private/ca.<example.com>.key \
    --template /etc/ssl/ca.info \
    --outfile /etc/ssl/certs/ca.<example.com>.cert
    
  • Make a private key for the server:

    certtool --generate-privkey \
    --bits 1024 \
    --outfile /etc/ssl/private/ldap.<example.com>.key
    
  • Create template file /etc/ssl/ldap.info:

    organization = <Example Company>
    cn = ldap.<example.com>
    tls_www_server
    encryption_key
    signing_key
    expiration_days = 3650
    
  • Create the server's certificate:

    certtool --generate-certificate \
    --load-privkey /etc/ssl/private/ldap.<example.com>.key \
    --load-ca-certificate /etc/ssl/certs/ca.<example.com>.cert \
    --load-ca-privkey /etc/ssl/private/ca.<example.com>.key \
    --template /etc/ssl/ldap.info \
    --outfile /etc/ssl/certs/ldap.<example.com>.cert
    
  • Create configuration file /etc/ssl/certinfo.ldif:

    dn: cn=config
    add: olcTLSCACertificateFile
    olcTLSCACertificateFile: /etc/ssl/certs/ca.<example.com>.cert
    -
    add: olcTLSCertificateFile
    olcTLSCertificateFile: /etc/ssl/certs/ldap.<example.com>.cert
    -
    add: olcTLSCertificateKeyFile
    olcTLSCertificateKeyFile: /etc/ssl/private/ldap.<example.com>.key
    
  • Add configuration to LDAP:

    ldapmodify -Y EXTERNAL -H ldapi:/// -f /etc/ssl/certinfo.ldif
    
  • Set up ownership and permissions of private key:

    adduser openldap ssl-cert
    chgrp ssl-cert /etc/ssl/private/ldap.<example.com>.key
    chmod g+r /etc/ssl/private/ldap.<example.com>.key
    chmod o-r /etc/ssl/private/ldap.<example.com>.key
    
  • Edit /etc/default/slapd (Ubuntu says it's not needed):

    SLAPD_SERVICES="ldap://127.0.0.1:389/ ldaps:/// ldapi:///"
    
  • Restart and check LDAP

    service slapd restart
    
    
    netstat -tlpn | grep slapd
    tcp        0      0 0.0.0.0:636             0.0.0.0:*               LISTEN      16161/slapd
    tcp        0      0 127.0.0.1:389           0.0.0.0:*               LISTEN      16161/slapd
    tcp6       0      0 :::636                  :::*                    LISTEN      16161/slapd
    

Populate LDAP via LDIF files

Create LDIF (LDAP Data Interchange Format) file with basic tree structure (/var/tmp/tree.ldif):

# Account directory
dn: ou=People,dc=example,dc=com
ou: People
objectClass: organizationalUnit

# Group directory
dn: ou=Group,dc=example,dc=com
ou: Group
objectClass: organizationalUnit

Create LDIF file with user account information (/var/tmp/acct.ldif):

# User data (equivalent to /etc/passwd)
dn: uid=jlebowski,ou=people,dc=example,dc=com
uid: jlebowski
uidNumber: 1010
gidNumber: 100
cn: Jeffrey
sn: Lebowski
displayName: JeffreyLebowski
mail: the.dude@example.com
objectClass: top
objectClass: person
objectClass: posixAccount
objectClass: shadowAccount
objectClass: inetOrgPerson
loginShell: /bin/bash
homeDirectory: /home/jlebowski

# Group data (equivalent to /etc/group)
dn: cn=users,ou=Group,dc=example,dc=com
objectClass: posixGroup
objectClass: top
cn: users
gidNumber: 100
memberUid: jlebowski

Adding information from LDIF files to LDAP:

ldapadd -c -x -D cn=admin,dc=example,dc=com -W -f /var/tmp/tree.ldif
ldapadd -c -x -D cn=admin,dc=example,dc=com -W -f /var/tmp/acct.ldif
  • -c -- continue even if errors are detected
  • -x -- use simple authentication rather than the default SASL
  • -D <binddn> -- bind to the directory using the specified distinguished name (DN)
  • -W -- prompt for the bind password
  • -f <file> -- read LDIF records from <file>

Accounts management

Changing password - one of:

  • use ldappasswd:

    ldappasswd -D cn=admin,dc=example,dc=com -W -S uid=jlebowski,ou=People,dc=example,dc=com

    • -S -- prompts for the new password
  • generate a hash with slappasswd, paste it into an LDIF file and run it through ldapmodify
  • if PAM is configured correctly, the user can simply run passwd on an LDAP client
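The slappasswd variant looks like this. The {SSHA} hash below is only a placeholder; paste in whatever slappasswd actually printed for you:

```
# generate the hash
slappasswd

# /var/tmp/chpw.ldif
dn: uid=jlebowski,ou=People,dc=example,dc=com
changetype: modify
replace: userPassword
userPassword: {SSHA}xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# apply it
ldapmodify -x -D cn=admin,dc=example,dc=com -W -f /var/tmp/chpw.ldif
```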

Deleting accounts:

    ldapdelete -c -x -D cn=admin,dc=example,dc=com -W uid=jlebowski,ou=people,dc=example,dc=com

Querying a server about accounts

Linux

See also Querying Active Directory with Unix LDAP tools.

  • getent returns info from various sources, including local account DB

    getent passwd jlebowski
    

  • you can use filters (conceptually similar to regexes):

    ldapsearch -x uid=jlebowski
    
    • (&(uid=jlebowski)(!(ou=Accounting))) -- search for jlebowski who is not a member of the Accounting department

  • when ldapsearch sees UTF-8 encoded values it displays them as base64, so you need to convert them:

    ldapsearch -x -h ldap.company.com -b 'dc=company,dc=com' -s sub -D 'user@company.com' -S 'employeeID' \
    -W '(&(objectClass=person)(employeeID>=0)(employeeID<=20416))' employeeID title sn | PERL_UNICODE=S \
    perl -MMIME::Base64 -MEncode=decode -n -00 -e 's/\n //g;s/(?<=:: )(\S+)/decode("UTF-8",decode_base64($1))/eg;print' \
    > ldap.out

Windows

Use GUI tools like ADExplorer or LDAP Admin.


2014-03-20

LVM

(Up-to-date source of this post.)

LVM is the implementation of logical volume management in Linux. As I don't use it on a day-to-day basis, I wrote this post in case I forget the basics :-).

Terminology

       sda1   sdc       (PVs on partitions or whole disks)
          \   /
           \ /
          diskvg        (VG)
          /  |  \
         /   |   \
     usrlv rootlv varlv (LVs)
       |      |     |
    ext4  reiserfs  xfs (filesystems)

  • Physical volume (PV) — partition (ex. /dev/sda1), disk (ex. /dev/sdc) or RAID device (ex. /dev/md0)
  • Volume group (VG) — group of physical volumes (ex. diskvg)
  • Logical volume (LV) — equivalent of standard partitions, where filesystems can be created (ex. usrlv)

Working with LVM

Creating Volumes

  1. Create PV (initialize disk)

    pvcreate /dev/md0
    

    Check the results with pvdisplay

  2. Create VG

    vgcreate raid1vg /dev/md0
    

    Check the results with vgdisplay

  3. Create LV

    lvcreate --name backuplv --size 50G raid1vg
    

    Check the results with lvdisplay

  4. Create filesystem

    mkfs.ext3 /dev/raid1vg/backuplv
    
  5. Edit /etc/fstab

    # RAID 1 + LVM
    /dev/raid1vg/backuplv   /backup        ext3    rw,noatime      0       0
    
  6. Create mount point and mount volume(s)

    mkdir -p /backup
    mount -a
    

Extending LV

  1. Extend the LV

    lvextend -L +5G /dev/raid1vg/backuplv
    
  2. Re-size the filesystem (online re-sizing doesn’t seem to cause troubles)

    resize2fs /dev/raid1vg/backuplv
    

2014-02-16

Fixing Email Aliases when Using SSMTP

(Up-to-date source of this post.)

I've found that one of the simplest ways to send emails from scripts running on your workstation is ssmtp (an ex-colleague showed it to me). It's very easy to install and set up; basically you just edit one or two lines in ssmtp.conf. However, there's a caveat: ssmtp does not consider aliases when sending email, so cron was trying to send email to a non-existent address like root@mybox.local.domain. To fix this you have to do the aliasing in the mail program, by adding lines like these to /etc/mail.rc:

alias root root<username@company.com>
alias postmaster postmaster<username@company.com>
alias username username<username@company.com>

I also put this variable into my crontab:

MAILTO=username@company.com

2014-01-26

Clone and Resize KVM Virtual Machine

(Up-to-date source of this post.)

I needed to upgrade (from Squeeze to Wheezy) some important virtual servers. As I wanted the upgrade to have minimal impact, I chose this procedure:

  1. Create identical copy of the server to upgrade
  2. Upgrade the copy
  3. Upgrade the server if everything worked ok with the copy

The servers to upgrade were virtual machines (VMs) running on KVM. I also discovered that some servers needed more space because their disks had filled up during the upgrade, so a disk resize was needed as well. The following steps did the task:

1) Copy the image (.qcow2) and the configuration (.xml) files to some other location. The image file should ideally be copied from a snapshot to avoid data inconsistencies a running machine could create.

2) Edit the following fields in the copied .xml file accordingly

name
uuid
source dev    # make sure you enter the copied image path!
mac address
source bridge # change the VLAN to avoid IP address conflicts

3) Boot the cloned VM and change the hostname and IP address by editing these files:

/etc/network/interfaces
/etc/hostname
/etc/hosts

4) Change back the VLAN and shutdown the cloned VM

5) Increase the disk size

# convert the qcow image to a plain raw file
qemu-img convert system.qcow -O raw system.raw
# create a dummy file (filled with zeros) of the size of extra space you want to add to your image (here 1GB)
dd if=/dev/zero of=zeros.raw bs=1024k count=1024
# add your extra space to your raw system image without fear
cat system.raw zeros.raw > big.raw
# finally convert the raw image back to a qcow file so as not to waste space
qemu-img convert big.raw -O qcow growed-system.qcow

6) Boot the cloned VM and using cfdisk delete the old small partition and create a new one with the free space

7) Increase the filesystem using:

e2fsck -f
resize2fs

Make sure the VM's image file (.qcow) has the correct access rights, otherwise your system might have disk-related problems (I was bitten by this and a nice colleague helped me out).

2014-01-06

Simple Source Code Management with Git

(Up-to-date source of this post.)

Although I'm more of a sysadmin than a developer, I often write scripts (in Perl or Bash), and I tend to use Git for tracking them. Every Git repository contains the complete history of revisions and does not depend on a central server or network access. I don't work within a big group of developers, so I try to keep things simple.

First time setup


The following steps are usually needed to be done only once on a machine and are global for all Git repositories:

git config --global user.name "Jeffrey Lebowski"
git config --global user.email "jeffrey.lebowski@dude.com"
git config --global color.ui true
git config --global alias.lol 'log --pretty=oneline --abbrev-commit --graph --decorate'
git config --global core.editor vim
git config --global merge.tool vimdiff

Then I can check the configuration like this:

git config --list

or

cat ~/.gitconfig

Starting a new Git repository


One way to start working with Git is to initialize a directory as a new Git repository:

mkdir project
cd project
git init

Notice the new .git directory. Then I take a snapshot of the contents of all files within the current working directory (.):

git add .    # temporary storage - index

This command adds the files to a temporary staging area called index. To permanently store the index:

git commit   # permanent storage

I enter a commit message and I'm done.

Cloning a Git repository


I often need to get an already existing repository (some code that I or someone else has already written and stored, for example, on GitHub):

## Any server with ssh and git
git clone ssh://[user@]server.xy/path/to/repo.git/
## GitHub
git clone git://github.com/ingydotnet/....git

Working with Git


When dealing with Git, it's best to work in small bits. Rule of thumb: if you can't summarize it in a sentence, you've gone too long without committing.

My typical working cycle is:

1. Work on my project.

2. Check whether something has changed:

git status

3. Check what has changed:

git diff

4. Add and commit changes (combines git add and git commit in one step):

git commit -am "commit message"

If I not only changed files but also added some new ones, I have to add them explicitly:

git add newfile1 newfile2 newfolder3

and then commit as in step 4.
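The whole cycle can be rehearsed in a throwaway repository; the path and commit messages are arbitrary:

```shell
mkdir /tmp/git-demo && cd /tmp/git-demo
git init                          # creates the .git directory
echo 'hello' > README
git status                        # README shows up as untracked
git add README
git commit -m "add README"        # first snapshot
echo 'world' >> README
git diff                          # shows the pending change
git commit -am "extend README"    # stage + commit in one step
git log --oneline                 # two commits, newest first
```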

Excluding some files


To set certain files or patterns to be ignored by Git, I create a file called .gitignore in my project’s root directory:

# Don't track dot-files
.*
!/.gitignore

.gitignore is usually checked into Git and distributed along with everything else.