Partition Tables

Many computer users are familiar with the basic idea of filesystems: a storage device is divided into partitions, and each partition is formatted with a particular filesystem that holds files. Just as a filesystem holds files, a partition table holds the partitions and their filesystems. There are a few partition table types; the most commonly known one is MBR.

Master Boot Record (MBR) – Most IBM PC-compatible storage devices use this partition table format. MBR is often referred to as the msdos partition table. MBR can only address storage devices up to two terabytes (with 512-byte sectors). MBR supports the concepts of primary, extended, and logical partitions: a storage device with an MBR table can have at most four primary partitions, or three primary partitions plus one extended partition containing logical partitions. Users wanting to make a multiboot system with more than four operating systems often run into this four-primary-partition limit, and some operating systems cannot boot from logical partitions. Such multiboot systems must use a different partition table, discussed later.
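To make the on-disk layout concrete: a valid MBR ends with the two-byte boot signature 0x55 0xAA at offsets 510-511 of the first sector. The sketch below fakes a blank "disk" with an image file (disk.img is an arbitrary name) so no real device is touched:

```shell
# Create a blank 1 MiB disk image and stamp the MBR boot signature into it.
truncate -s 1M disk.img
printf '\x55\xaa' | dd of=disk.img bs=1 seek=510 conv=notrunc status=none
# Read the two bytes back; "55aa" marks the sector as a valid MBR.
sig=$(od -An -tx1 -j 510 -N 2 disk.img | tr -d ' \n')
echo "$sig"   # 55aa
```

Real partitioning tools write this signature (plus the four 16-byte primary partition entries that precede it) for you; this is only to show where the table lives.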

GUID Partition Table (GPT) – Some IBM PC-compatible storage devices use GPT, although that is usually because the user reformatted from MBR to GPT. However, most Intel-based Mac systems use GPT by default. GPT offers many improvements over MBR. GPT can support storage devices up to about nine zettabytes, and it does not share MBR's four-primary-partition limit, which makes it the recommended partition table for computers needing more than four operating systems on one hard drive. For example, if a computer with a ten-terabyte hard disk is meant to be a multiboot system for seven different Linux distros, then GPT should be used. Most Unix and Unix-like operating systems fully support GPT. However, Windows can only boot from a GPT disk on 64-bit systems with UEFI firmware. As for Mac systems, only the Intel-based ones can boot from GPT.

Apple Partition Map (APM) – The PowerPC-based Mac systems can only boot from APM partition tables. This is usually referred to as the Mac or Apple partition table. Linux and Intel-based Macs can use APM. Windows does not support APM.

Amiga rigid disk block (RDB) – Amiga systems use the RDB partition table. These partition tables support up to about 4*10^19TB. That is forty quintillion terabytes.

AIX – The AIX partition table is used by proprietary AIX systems. By default, Linux does not natively support the AIX partition table.

BSD – BSD Unix systems can use the BSD partition table (disklabel). Windows cannot read BSD partition tables, and Linux can only do so if the kernel was built with BSD disklabel support.

Others – Some other partition table formats are listed below. They are very rarely used, and little information about them is available on the Internet.

dvh
humax
pc98
sgi
sun

Removable Storage – You may be wondering, “Which partition table do flash drives, SD cards, etc. use?”. Well, since all systems can at least read MBR, the majority of mobile/removable storage uses MBR.

Formatting the partition table – To change or reformat a partition table, open GParted and click “Device > Create Partition Table”, then choose the desired partition table. Alternately, Parted can be used to format a storage device with a particular partition table (also called a “disk label”). Doing so will erase all partitions and data on the selected storage device. The command is “parted DEVICE mklabel DISKLABEL”, and it requires root privileges. Afterwards, the user will need to create new partitions on the storage device. The partition tables supported by Parted are listed below.
bsd
loop (raw disk access)
gpt
mac (Apple Partition Map (APM))
msdos (commonly called MBR)
pc98
sun
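A hedged sketch of the mklabel command, run against a throwaway image file rather than a real device (substitute something like /dev/sdb only with care, since mklabel destroys the existing partition table):

```shell
# Practice on an image file; parted treats it like a disk.
truncate -s 100M disk.img
if command -v parted >/dev/null 2>&1; then
    parted -s disk.img mklabel gpt    # or msdos, bsd, mac, sun, pc98, loop
    # Confirm the new disk label.
    label=$(parted -s disk.img print | awk '/Partition Table/ {print $3}')
else
    label="gpt"   # parted is not installed here, so nothing was written
fi
echo "$label"
```

On a real device the same commands need root privileges, and new partitions must be created afterwards (e.g. with parted's mkpart).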

WARNING: Changing the partition table will erase the filesystems, partitions, and files. This is a more “low-level” format. However, the files are not truly gone. Read http://www.linux.org/threads/undelete-files-on-linux-systems.4316/ to fully understand the “deletion” of files.

You may also be wondering which is the best one for you. Generally, use MBR with Windows and mobile systems (like Android), APM on PowerPC Macs and iOS, RDB on Amiga, and GPT on all other systems. However, you may have specific reasons to place an OS on a partition table other than the one recommended above.

Which Distro is Right for Me?

Debian Debian Linux will be well-suited for those who need stability. Debian Linux uses older software that is known to be stable. Generally, hospitals that use Linux will use Debian on important systems. Debian is usually a wise choice for a server system because the software is usually stable. The recommended system requirements are 1GHz processor, 512MB memory, 5GB hard-drive. http://www.debian.org/distrib/

Ubuntu For those that like Debian, but want the latest software and an interface with better graphics, Ubuntu is a common choice. Ubuntu is stable, but many Linux users recommend Debian for critical systems. The average mainstream desktop/laptop user will probably want Ubuntu. The recommended system requirements are 800MB memory, 1GHz processor, and 5GB hard-drive. http://www.ubuntu.com/download

Kubuntu Same as Ubuntu, but uses KDE. Users that dislike Unity may prefer Kubuntu. The recommended system requirements are 1GHz processor, 10GB hard-drive, and more than 1GB memory. http://www.kubuntu.org/getkubuntu

Xubuntu Xubuntu is a lightweight Ubuntu system for older hardware or hardware with fewer resources. Xubuntu uses the XFCE interface instead of Unity. The recommended system requirements are 512MB memory and 5GB hard-drive (try Lubuntu for something even more lightweight). http://xubuntu.org/getxubuntu/
Linux Mint People that want a Debian-based system but dislike Unity may be interested in Linux Mint. Linux Mint may come with the MATE, Cinnamon, XFCE, or KDE interface (user’s choice). The recommended system requirements are 1GHz processor, 1GB memory, and 10GB hard-drive. http://www.linuxmint.com/download.php

BackTrack (Kali) This is an Ubuntu-based high-security system. BackTrack (now called Kali) is often used for hacking into other systems, although that is illegal unless you are breaking into a computer of your own because you forgot the password. BackTrack/Kali is also used to evaluate security: some companies may use it to find security flaws in their own systems. http://www.kali.org/downloads/

Slackware Slackware is a simple lightweight system. Usually, Slackware is preferred among advanced users due to Slackware being less of a user-friendly system compared to other distros. The recommended system requirements are i486 processor, 256MB memory, and 5GB hard-drive. Advanced users wanting a lightweight system may prefer Slackware. http://www.slackware.com/

Arch Arch Linux is a minimalistic system that is supposedly very simple. It is also a lightweight system that is used among advanced Linux users. Advanced users that dislike Slackware may like Arch. https://www.archlinux.org/download/

Fedora Some Linux users may say Fedora is the RedHat counterpart of Ubuntu (Debian system). Fedora is perfect for many mainstream desktop/laptop users. Fedora handles graphics well and uses appealing interfaces. The recommended system requirements are 1GB memory and 10GB hard-drive. http://fedoraproject.org/en/get-fedora

Red Hat Enterprise Linux RedHat is usually used as a server system. Fedora is the client/desktop system while RedHat is the server “version”. So, if you would like to use Fedora as a server or need a system that is more stable than Fedora, then use RedHat.

Puppy Linux This is a very lightweight system that is usually used on older systems due to the light requirements. Puppy Linux may not have the best-looking interface, but it is still easy to use. The recommended system requirements are 333MHz processor, 64MB memory, 512MB swap, and 1GB hard-drive. http://puppylinux.org/main/Download Latest Release.htm

AnitaOS This is a form of Puppy Linux developed by @Darren Hale intended for old hardware. AnitaOS uses old kernels while the mainstream Puppy Linux uses the newer kernels. http://sourceforge.net/projects/anitaos/ | http://www.linux.org/threads/anitaos-a-diy-distro-you-build-it-yourself.4401/

Damn Small Linux (DSL) This is a lightweight Linux system that requires 8MB of memory and at least an i486 processor. People needing a lightweight system may want DSL if they dislike Puppy Linux. http://www.damnsmalllinux.org/download.html

CentOS CentOS is often comparable to Linux Mint, but CentOS is Red-Hat-based instead of Debian-based. In fact, CentOS is RHEL without the branding. Basically, if you want RHEL, but do not want to pay for it and support, then get CentOS. People who like Linux Mint, but want a Red-Hat system may be interested in CentOS. The recommended system requirements are 256MB memory and 256MB hard-drive. http://www.centos.org/modules/tinycontent/index.php?id=30

OpenSUSE OpenSUSE is an RPM-based distro that features YaST and ZYpp. OpenSUSE is available as a rolling release or on a stable version-by-version basis. The minimum requirements include 2GB memory, 5GB hard-drive space, and a 2.4GHz AMD64 or Intel processor. http://www.opensuse.org

If a distro containing no closed-source software anywhere in the system is needed, then check out GNU.org’s list of 100% open-source GNU/Linux operating systems – https://www.gnu.org/distros/free-distros.en.html

Updating Apache to the latest version on DirectAdmin

You can check the currently installed version of Apache by running:
/usr/sbin/httpd -v

CustomBuild – current

If you’re using custombuild (as most new boxes are), run the following
cd /usr/local/directadmin/custombuild
./build update
./build apache
./build php n
./build rewrite_confs

CustomApache – end-of-life

If you are using CustomApache, run the following to update Apache 1.3 to the most recent version:
cd /usr/local/directadmin/customapache
./build clean
./build update
./build apache_mod_ssl

If you’re using Apache 2.x, use “./build apache_2” instead of apache_mod_ssl.
This should update both the configure options and the version of apache to the most recent version. Once the update has completed, you’ll need to restart apache:

RedHat:
/sbin/service httpd restart

FreeBSD:
/usr/local/etc/rc.d/httpd restart

Upgrading OpenSSH on CentOS

First, download the OpenSSH source tarball from the vendor and unpack it. You can find the tarballs at http://www.openssh.com/portable.html

cd /usr/src

wget http://mirror.team-cymru.org/pub/OpenBSD/OpenSSH/portable/openssh-6.8p1.tar.gz

tar -xvzf openssh-6.8p1.tar.gz

You may need to install a few things for the RPM build to work:

yum install rpm-build gcc make wget openssl-devel krb5-devel pam-devel libX11-devel xmkmf libXt-devel

Copy the spec file and tarball:

mkdir -p /root/rpmbuild/{SOURCES,SPECS}

cp ./openssh-6.8p1/contrib/redhat/openssh.spec /root/rpmbuild/SPECS/

cp openssh-6.8p1.tar.gz /root/rpmbuild/SOURCES/

Do a little magic:

cd /root/rpmbuild/SPECS
sed -i -e "s/%define no_gnome_askpass 0/%define no_gnome_askpass 1/g" openssh.spec
sed -i -e "s/%define no_x11_askpass 0/%define no_x11_askpass 1/g" openssh.spec
sed -i -e "s/BuildPreReq/BuildRequires/g" openssh.spec

…and build your RPM:

rpmbuild -bb openssh.spec

Now if you go back into /root/rpmbuild/RPMS/<arch> , you should see three RPMs. Go ahead and install them:

rpm -Uvh *.rpm

To verify the installed version, just type ‘ssh -V’ (or ‘ssh -v localhost’ and watch the debug output) and you should see the banner come up, indicating the new version.

*IMPORTANT! You may want to open a new SSH session to your server before exiting, to make sure everything is working! If you have a problem, simply:

yum downgrade openssh-server

Linux and UNIX view command-line history

Bash is the default shell on most Linux computers. Bash has a history command, which displays the history list with line numbers, i.e. it lists everything you have entered on the command line. You can recall commands from history to save time.

Task: View your command Line History

Type the following command:

$ history

Output:

911 man 7 signal
912 man ps
913 man 7 signal
914 man killall
915 killall -l
916 man killall
917 su –
918 su – lighttpd
919 su – lighttpd
920 cd /tmp/

So whenever you type a command, that command is saved in a file called .bash_history. Type the following commands to get more info:

$ help history

$ man bash (look for Event Designators for more info).
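History also works inside scripts if you enable it explicitly, which is handy for experimenting with the list non-interactively. A small sketch (the recorded commands are arbitrary examples):

```shell
# Non-interactive shells keep no history by default; "set -o history"
# turns the in-memory list on so the history builtin has something to show.
out=$(bash -c 'set -o history
date > /dev/null
uname > /dev/null
history')
echo "$out"
# Each entry is numbered; in an interactive shell they can be recalled
# with event designators such as !! (previous command) or !912 (entry 912).
```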

FreeBSD Security Update(s) Issued

A couple patches were released for FreeBSD to address various security vulnerabilities and it is recommended that you update as soon as possible.

Official Links:

https://www.freebsd.org/security/advisories/FreeBSD-SA-15:03.sctp.asc

https://www.freebsd.org/security/advisories/FreeBSD-SA-15:02.kmem.asc

Please look at the affected versions under each security advisory to determine if you are impacted and need to take any action.

vBSEO Possible Exploit

vBulletin has notified users that there is a possible exploit in vBSEO. Given that the software is no longer being developed, you will have to apply the fix manually:

vbseo/includes/functions_vbseo_hook.php:

if(isset($_REQUEST['ajax']) && isset($_SERVER['HTTP_REFERER']))
    $permalinkurl = $_SERVER['HTTP_REFERER'].$permalinkurl;

Should be changed to:

// if(isset($_REQUEST['ajax']) && isset($_SERVER['HTTP_REFERER']))
//     $permalinkurl = $_SERVER['HTTP_REFERER'].$permalinkurl;

Ongoing Discussion via WHT:

http://www.webhostingtalk.com/showthread.php?t=1444268

Coca Cola jumps on IBM’s private Cloud

IBM has won a five-year contract to manage Coca Cola Amatil’s (CCA) mission-critical SAP infrastructure on its private Cloud hosted in its Sydney datacentre. The project is expected to “drive significant operational costs” out of CCA’s South Pacific business. The company operates in Australia, NZ, Indonesia, PNG, Fiji, and Samoa.

Top 15 Most Popular File Sharing Websites | December 2014

Here are the top 15 most popular file sharing sites for December 2014:

 

1 | DropBox
35,000,000 – Estimated Unique Monthly Visitors | 114 – Compete Rank | 314 – Quantcast Rank | 110 – Alexa Rank |

2 | MediaFire
22,500,000 – Estimated Unique Monthly Visitors | 531 – Compete Rank | NA – Quantcast Rank | 171 – Alexa Rank |

3 | 4Shared
21,000,000 – Estimated Unique Monthly Visitors | 936 – Compete Rank | 239 – Quantcast Rank | 169 – Alexa Rank |

4 | Google Drive
18,500,000 – Estimated Unique Monthly Visitors | 100 – Compete Rank | 1,000 – Quantcast Rank | NA – Alexa Rank |

5 | SkyDrive
16,000,000 – Estimated Unique Monthly Visitors | 650 – Compete Rank | 550 – Quantcast Rank | NA – Alexa Rank |

6 | iCloud
9,500,000 – Estimated Unique Monthly Visitors | 429 – Compete Rank | 1,536 – Quantcast Rank | 987 – Alexa Rank |

7 | Box
6,750,000 – Estimated Unique Monthly Visitors | 1,274 – Compete Rank | 1,028 – Quantcast Rank | 877 – Alexa Rank |

8 | Mega
6,500,000 – Estimated Unique Monthly Visitors | 1,936 – Compete Rank | NA – Quantcast Rank | 670 – Alexa Rank |

9 | ZippyShare
6,250,000 – Estimated Unique Monthly Visitors | 2,337 – Compete Rank | NA – Quantcast Rank | 370 – Alexa Rank |

10 | Uploaded
6,000,000 – Estimated Unique Monthly Visitors | 2,943 – Compete Rank | NA – Quantcast Rank | 292 – Alexa Rank |

11 | DepositFiles
4,750,000 – Estimated Unique Monthly Visitors | 4,709 – Compete Rank | 2,913 – Quantcast Rank | 1,927 – Alexa Rank |

12 | HighTail
4,500,000 – Estimated Unique Monthly Visitors | 2,459 – Compete Rank | 4,297 – Quantcast Rank | 2,938 – Alexa Rank |

13 | SendSpace
4,000,000 – Estimated Unique Monthly Visitors | 5,790 – Compete Rank | 2,551 – Quantcast Rank | 1,474 – Alexa Rank |

14 | RapidShare
3,250,000 – Estimated Unique Monthly Visitors | 7,217 – Compete Rank | NA – Quantcast Rank | 2,084 – Alexa Rank |

15 | FileCrop.com
3,000,000 – Estimated Unique Monthly Visitors | 14,239 – Compete Rank | NA – Quantcast Rank | 5,823 – Alexa Rank |

KernelCare Security Update Issued

An update for KernelCare was just released to address CVE-2014-9322 and your OS should be automatically updated, unless specified otherwise.

Remote backup service

A remote, online, or managed backup service, sometimes marketed as cloud backup, is a service that provides users with a system for the backup, storage, and recovery of computer files. Online backup providers are companies that provide this type of service to end users (or clients). Such backup services are considered a form of cloud computing.

Online backup systems are typically built around a client software program that runs on a schedule, typically once a day, and usually at night while computers aren’t in use. This program typically collects, compresses, encrypts, and transfers the data to the remote backup service provider’s servers or off-site hardware.
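The collect/compress/encrypt steps above can be sketched with standard tools (the transfer to the provider is omitted; the file names and passphrase are illustrative only, and a real client would never hard-code a passphrase):

```shell
# Collect some data, then compress and encrypt it in one pipeline.
mkdir -p data
echo "hello" > data/file.txt
# -pbkdf2 needs OpenSSL 1.1.1+; older versions can drop that flag.
tar -czf - data | openssl enc -aes-256-cbc -pbkdf2 -pass pass:demo -out backup.tar.gz.enc
# Round-trip to confirm the backup is restorable.
mkdir -p restore
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:demo -in backup.tar.gz.enc | tar -xzf - -C restore
cat restore/data/file.txt   # hello
```

A real service would follow the pipeline with an upload step (and would verify the round trip regularly, per the integrity-validation point below).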

Characteristics

Service-based

  1. The assurance, guarantee, or validation that what was backed up is recoverable whenever it is required is critical. Data stored in the service provider’s cloud must undergo regular integrity validation to ensure its recoverability.
  2. Cloud BUR (BackUp & Restore) services need to provide a variety of granularities when it comes to RTOs (Recovery Time Objectives). One size does not fit all, either for customers or for the applications within a customer’s environment.
  3. The customer should never have to manage the back end storage repositories in order to back up and recover data.
  4. The interface used by the customer needs to enable the selection of data to protect or recover, the establishment of retention times, destruction dates as well as scheduling.
  5. Cloud backup needs to be an active process where data is collected from systems that store the original copy. This means that cloud backup will not require data to be copied into a specific appliance from where data is collected before being transmitted to and stored in the service provider’s data centre.

Ubiquitous access

  1. Cloud BUR utilizes standard networking protocols (which today are primarily but not exclusively IP based) to transfer data between the customer and the service provider.
  2. Vaults or repositories need to be always available to restore data to any location connected to the Service Provider’s Cloud via private or public networks.

Scalable and elastic

  1. Cloud BUR enables flexible allocation of storage capacity to customers without limit. Storage is allocated on demand and also de-allocated as customers delete backup sets as they age.
  2. Cloud BUR enables a Service Provider to allocate storage capacity to a customer. If that customer later deletes their data or no longer needs that capacity, the Service Provider can then release and reallocate that same capacity to a different customer in an automated fashion.

Metered by use

  1. Cloud Backup allows customers to align the value of data with the cost of protecting it. It is procured on a per-gigabyte per month basis. Prices tend to vary based on the age of data, type of data (email, databases, files etc.), volume, number of backup copies and RTOs.

Shared and secure

  1. The underlying enabling technology for Cloud Backup is a full stack native cloud multitenant platform (shared everything).
  2. Data mobility/portability prevents service provider lock-in and allows customers to move their data from one Service Provider to another, or entirely back into a dedicated Private Cloud (or a Hybrid Cloud).
  3. Security in the cloud is critical. One customer can never have access to another’s data. Additionally, even Service Providers must not be able to access their customer’s data without the customer’s permission.

Enterprise-class cloud backup

An enterprise-class cloud backup solution must include an on-premise cache, to mitigate any issues due to inconsistent Internet connectivity.

Hybrid cloud backup is a backup approach combining Local backup for fast backup and restore, along with Off-site backup for protection against local disasters. According to Liran Eshel, CEO of CTERA Networks, this ensures that the most recent data is available locally in the event of need for recovery, while archived data that is needed much less often is stored in the cloud.

Hybrid cloud backup works by storing data to local disk so that the backup can be captured at high speed, and then either the backup software or a D2D2C (Disk to Disk to Cloud) appliance encrypts and transmits data to a service provider. Recent backups are retained locally, to speed data recovery operations. There are a number of cloud storage appliances on the market that can be used as a backup target, including appliances from CTERA Networks, Nasuni, StorSimple and TwinStrata. Examples of Enterprise-class cloud backup solutions include StoreGrid, Datto, GFI Software’s MAX Backup, and IASO Backup.

Recent improvements in CPU availability allow increased use of software agents instead of hardware appliances for enterprise cloud backup.  The software-only approach can offer advantages including decreased complexity, simple scalability, significant cost savings and improved data recovery times. Examples of no-appliance cloud backup providers include Intronis and Zetta.net.

Typical features

Encryption
Data should be encrypted before it is sent across the internet, and it should be stored in its encrypted state. Encryption should be at least 256 bits, and the user should have the option of using his own encryption key, which should never be sent to the server.
Network backup
A backup service supporting network backup can back up multiple computers, servers or Network Attached Storage appliances on a local area network from a single computer or device.
Continuous backup – Continuous Data Protection
Allows the service to back up continuously or on a predefined schedule. Both methods have advantages and disadvantages. Most backup services are schedule-based and perform backups at a predetermined time. Some services provide continuous data backups which are used by large financial institutions and large online retailers. However, there is typically a trade-off with performance and system resources.
File-by-File Restore
The ability for users to restore files themselves, without the assistance of a Service Provider, by allowing the user to select files by name and/or folder. Some services allow users to select files by searching for filenames and folder names, by dates, by file type, by backup set, and by tags.
Online access to files
Some services allow you to access backed-up files via a normal web browser. Many services do not provide this type of functionality.
Data compression
Data will typically be compressed with a lossless compression algorithm to minimize the amount of bandwidth used.
Differential data compression
A way to further minimize network traffic is to transfer only the binary data that has changed from one day to the next, similar to the open source file transfer service Rsync.  More advanced online backup services use this method rather than transfer entire files.
Bandwidth usage
User-selectable option to use more or less bandwidth; it may be possible to set this to change at various times of day.
Off-Line Backup
Off-line backup, alongside and as part of the online backup solution, covers daily backups during periods when the network connection is down. At such times the remote backup software must perform backups onto a local media device such as a tape drive, a disk, or another server. The minute the network connection is restored, the remote backup software updates the remote datacenter with the changes from the off-line backup media.
Synchronization
Many services support data synchronization allowing users to keep a consistent library of all their files across many computers. The technology can help productivity and increase access to data.

Common features for business users

Bulk restore
A way to restore data from a portable storage device when a full restore over the Internet might take too long.
Centralized management console
Allows for an IT department or staff member to monitor backups for the user.
File retention policies
Many businesses require a flexible file retention policy that can be applied to an unlimited number of groups of files called “sets”.
Fully managed services
Some services offer a higher level of support to businesses that might request immediate help, proactive monitoring, personal visits from their service provider, or telephone support.
Redundancy
Multiple copies of data backed up at different locations. This can be achieved by having two or more mirrored data centers, or by keeping a local copy of the latest version of backed up data on site with the business.
Regulatory compliance
Some businesses are required to comply with government regulations that govern privacy, disclosure, and legal discovery. A service provider that offers this type of service assists customers with proper compliance with and understanding of these laws.
Seed loading
Ability to send a first backup on a portable storage device rather than over the Internet when a user has large amounts of data that they need quickly backed up.
Server backup
Many businesses require backups of servers and the special databases that run on them, such as groupware, SQL, and directory services.
Versioning
Keeps multiple past versions of files to allow rollback to, or restoration from, a specific point in time.
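The versioning idea can be sketched with nothing more than date-stamped snapshot copies (directory names here are arbitrary):

```shell
# Keep multiple restorable points in time by snapshotting the source tree.
mkdir -p src snapshots
echo "v1" > src/doc.txt
cp -a src snapshots/2015-01-01      # first "backup"
echo "v2" > src/doc.txt             # the file changes afterwards
cp -a src snapshots/2015-01-02      # second "backup"
# Either version can now be rolled back to:
cat snapshots/2015-01-01/doc.txt    # v1
cat snapshots/2015-01-02/doc.txt    # v2
```

Real backup services store versions far more efficiently, e.g. with hard-linked snapshots (cp -al, rsync --link-dest) or block-level deduplication, but the restore semantics are the same.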

Cost factors

Online backup services are usually priced as a function of the following things:

  1. The total amount of data being backed up.
  2. The number of machines covered by the backup service.
  3. The maximum number of versions of each file that are kept.
  4. Data retention and archiving period options
  5. Managed backups vs. Unmanaged backups
  6. The level of service and features available

Some vendors limit the number of versions of a file that can be kept in the system. Some services omit this restriction and provide an unlimited number of versions. Add-on features (plug-ins), like the ability to back up currently open or locked files, are usually charged as an extra, but some services provide this built in.

Most remote backup services reduce the amount of data to be sent over the wire by only backing up changed files, which also reduces the customer’s total stored data. The amount of data sent and stored can be reduced further by transmitting only the changed data bits via binary or block-level incremental backups. Solutions that transmit only these changed binary data bits do not waste bandwidth by transmitting the same file data over and over again if only small amounts change.

Advantages

Remote backup has advantages over traditional backup methods:

  • Perhaps the most important aspect of backing up is that backups are stored in a different location from the original data. Traditional backup requires manually taking the backup media offsite.
  • Remote backup does not require user intervention. The user does not have to change tapes, label CDs or perform other manual steps.
  • Unlimited data retention (presuming the backup provider stays in business).
  • Backups are automatic.
  • Some remote backup services will work continuously, backing up files as they are changed.
  • Most remote backup services will maintain a list of versions of your files.
  • Most remote backup services will use 128- to 448-bit encryption to send data over unsecured links (i.e. the Internet).
  • A few remote backup services can reduce backup size by transmitting only the changed binary data bits.

Disadvantages

Remote backup has some disadvantages over traditional backup methods:

  • Depending on the available network bandwidth, the restoration of data can be slow. Because data is stored offsite, the data must be recovered either via the Internet or via a disk shipped from the online backup service provider.
  • Some backup service providers have no guarantee that stored data will be kept private — for example, from employees. As such, most recommend that files be encrypted.
  • It is possible that a remote backup service provider could go out of business or be purchased, which may affect the accessibility of one’s data or the cost to continue using the service.
  • If the encryption password is lost, data recovery will be impossible. However with managed services this should not be a problem.
  • Residential broadband services often have monthly limits that preclude large backups. They are also usually asymmetric; the user-to-network link regularly used to store backups is much slower than the network-to-user link used only when data is restored.
  • In terms of price, when looking at the raw cost of hard disks, remote backups cost about 1-20 times per GB what a local backup would cost.

Managed vs. unmanaged

Some services provide expert backup management services as part of the overall offering. These services typically include:

  • Assistance configuring the initial backup
  • Continuous monitoring of the backup processes on the client machines to ensure that backups actually happen
  • Proactive alerting in the event that any backups fail
  • Assistance in restoring and recovering data

Disaster recovery

Disaster recovery (DR) involves a set of policies and procedures to enable the recovery or continuation of vital technology infrastructure and systems following a natural or human-induced disaster. Disaster recovery focuses on the IT or technology systems supporting critical business functions, as opposed to business continuity, which involves keeping all essential aspects of a business functioning despite significant disruptive events. Disaster recovery is therefore a subset of business continuity.

Classification of disasters

Disasters can be classified into two broad categories. The first is natural disasters such as floods, hurricanes, tornadoes or earthquakes. While preventing a natural disaster is very difficult, risk management measures such as avoiding disaster-prone situations and good planning can help. The second category is man-made disasters, such as hazardous material spills, infrastructure failure, bio-terrorism, and disastrous IT bugs or failed change implementations. In these instances, surveillance, testing and mitigation planning are invaluable.

Importance of disaster recovery planning

Recent research supports the idea that implementing a more holistic pre-disaster planning approach is more cost-effective in the long run. Every $1 spent on hazard mitigation (such as a disaster recovery plan) saves society $4 in response and recovery costs.

As IT systems have become increasingly critical to the smooth operation of a company, and arguably the economy as a whole, the importance of ensuring the continued operation of those systems, and their rapid recovery, has increased. For example, of companies that had a major loss of business data, 43% never reopen and 29% close within two years. As a result, preparation for continuation or recovery of systems needs to be taken very seriously. This involves a significant investment of time and money with the aim of ensuring minimal losses in the event of a disruptive event.

Control measures

Control measures are steps or mechanisms that can reduce or eliminate various threats to organizations. Different types of measures can be included in a disaster recovery plan (DRP).

Disaster recovery planning is a subset of a larger process known as business continuity planning and includes planning for resumption of applications, data, hardware, electronic communications (such as networking) and other IT infrastructure. A business continuity plan (BCP) includes planning for non-IT related aspects such as key personnel, facilities, crisis communication and reputation protection, and should refer to the disaster recovery plan (DRP) for IT related infrastructure recovery / continuity.

IT disaster recovery control measures can be classified into the following three types:

  1. Preventive measures – Controls aimed at preventing an event from occurring.
  2. Detective measures – Controls aimed at detecting or discovering unwanted events.
  3. Corrective measures – Controls aimed at correcting or restoring the system after a disaster or an event.

Good disaster recovery practice dictates that these three types of controls be documented and exercised regularly using so-called “DR tests”.

Strategies

Prior to selecting a disaster recovery strategy, a disaster recovery planner first refers to their organization’s business continuity plan which should indicate the key metrics of recovery point objective (RPO) and recovery time objective (RTO) for various business processes (such as the process to run payroll, generate an order, etc.). The metrics specified for the business processes are then mapped to the underlying IT systems and infrastructure that support those processes.

Incomplete RTOs and RPOs can quickly derail a disaster recovery plan. Every item in the DR plan requires a defined recovery point and time objective, as failure to create them may lead to significant problems that can extend the disaster’s impact. Once the RTO and RPO metrics have been mapped to IT infrastructure, the DR planner can determine the most suitable recovery strategy for each system. The organization ultimately sets the IT budget and therefore the RTO and RPO metrics need to fit with the available budget. While most business unit heads would like zero data loss and zero time loss, the cost associated with that level of protection may make the desired high availability solutions impractical. A cost-benefit analysis often dictates which disaster recovery measures are implemented.

Some of the most common strategies for data protection include:

  • backups made to tape and sent off-site at regular intervals
  • backups made to disk on-site and automatically copied to off-site disk, or made directly to off-site disk
  • replication of data to an off-site location, which overcomes the need to restore the data (only the systems then need to be restored or synchronized), often making use of storage area network (SAN) technology
  • Hybrid Cloud solutions that replicate both on-site and to off-site data centers. These solutions provide the ability to instantly fail-over to local on-site hardware, but in the event of a physical disaster, servers can be brought up in the cloud data centers as well. Examples include Quorum, rCloud from Persistent Systems, or EverSafe.
  • the use of high availability systems which keep both the data and system replicated off-site, enabling continuous access to systems and data, even after a disaster (often associated with cloud storage)
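
The disk-to-disk strategy above can be sketched as a two-step script. This is a minimal illustration, not from the article: the paths and the host name dr-site.example.com are placeholders, and the script prints the rsync commands instead of running them so the plan can be reviewed first (set DRY_RUN=0 on a real host to execute).

```shell
# Sketch: on-site disk backup, then an automatic copy to off-site disk.
SRC="/var/data"                        # data to protect (placeholder)
LOCAL_BACKUP="/backup/data"            # on-site backup disk (placeholder)
OFFSITE="dr-site.example.com:/backup/data"  # off-site target (placeholder)

DRY_RUN=${DRY_RUN:-1}
run() {
    # Print the command in dry-run mode; execute it otherwise.
    if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi
}

run rsync -a --delete "$SRC/" "$LOCAL_BACKUP/"      # fast local restore point
run rsync -a --delete "$LOCAL_BACKUP/" "$OFFSITE/"  # off-site copy for DR
```

The two-step shape matters: the local copy gives a fast restore point, while the off-site copy covers a site-wide disaster.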

In many cases, an organization may elect to use an outsourced disaster recovery provider to provide a stand-by site and systems rather than using their own remote facilities, increasingly via cloud computing.

In addition to preparing for the need to recover systems, organizations also implement precautionary measures with the objective of preventing a disaster in the first place. These may include:

  • local mirrors of systems and/or data and use of disk protection technology such as RAID
  • surge protectors — to minimize the effect of power surges on delicate electronic equipment
  • use of an uninterruptible power supply (UPS) and/or backup generator to keep systems going in the event of a power failure
  • fire prevention/mitigation systems such as alarms and fire extinguishers
  • anti-virus software and other security measures

Basic operations in OpenVZ environment

This article assumes you have already installed OpenVZ. If not, see the OpenVZ installation section below for the steps needed.

Create and start a container

To create and start a container, run the following commands:

[host-node]# vzctl create CTID --ostemplate osname
[host-node]# vzctl set CTID --ipadd a.b.c.d --save
[host-node]# vzctl set CTID --nameserver a.b.c.d --save
[host-node]# vzctl start CTID

Here CTID is the numeric ID for the container; osname is the name of the OS template for the container, and a.b.c.d is the IP address to be assigned to the container.

Example:

[host-node]# vzctl create 101 --ostemplate fedora-core-5-minimal
[host-node]# vzctl set 101 --ipadd 10.1.2.3 --save
[host-node]# vzctl set 101 --nameserver 10.0.2.1 --save
[host-node]# vzctl start 101

Your freshly-created container should be up and running now; you can see its processes:

[host-node]# vzctl exec CTID ps ax
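
The same check can be run across every running container at once. This is a small helper of my own, not from the article; it assumes the vzlist tool that ships alongside vzctl, and degrades gracefully where it is not installed:

```shell
# List the processes of every running container via vzlist + vzctl exec.
list_ct_procs() {
    if ! command -v vzlist >/dev/null 2>&1; then
        echo "vzlist not installed"
        return 0
    fi
    # -H suppresses the header, -o ctid prints only the container IDs.
    for ctid in $(vzlist -H -o ctid); do
        echo "=== container $ctid ==="
        vzctl exec "$ctid" ps ax
    done
}
list_ct_procs
```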

Enter and exit the container

To enter the container, run the following command:

[host-node]# vzctl enter CTID
entered into container CTID
[container]#

To exit the container, just type exit and press Enter:

[container]# exit
exited from container CTID
[host-node]#

Stop and destroy the container

To stop the container:

[host-node]# vzctl stop CTID
Stopping container ...
Container was stopped
Container is unmounted

And to destroy the container:

[host-node]# vzctl destroy CTID
Destroying container private area: /vz/private/CTID
Container private area was destroyed
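
The whole lifecycle covered in this article can be collected into one function. This is a sketch, reusing the example values from above; it echoes the vzctl commands rather than executing them so the sequence can be inspected on any machine (drop the echo on a real host node):

```shell
# Full container lifecycle: create, configure, start, stop, destroy.
ct_lifecycle() {   # usage: ct_lifecycle CTID OSTEMPLATE IP NAMESERVER
    ctid=$1; tmpl=$2; ip=$3; ns=$4
    echo vzctl create "$ctid" --ostemplate "$tmpl"
    echo vzctl set "$ctid" --ipadd "$ip" --save
    echo vzctl set "$ctid" --nameserver "$ns" --save
    echo vzctl start "$ctid"
    # ...the container does its work, then tear it down:
    echo vzctl stop "$ctid"
    echo vzctl destroy "$ctid"
}

ct_lifecycle 101 fedora-core-5-minimal 10.1.2.3 10.0.2.1
```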

OpenVZ installation

Requirements

This guide assumes you are running RHEL (CentOS, Scientific Linux) 6 on your system. Currently, this is the recommended platform to run OpenVZ on.

/vz file system

It is recommended to use a separate partition for containers (by default /vz) and format it to ext4.

yum pre-setup

Download the openvz.repo file and put it in your /etc/yum.repos.d/ directory:

wget -P /etc/yum.repos.d/ http://ftp.openvz.org/openvz.repo

Import OpenVZ GPG key used for signing RPM packages:

rpm --import http://ftp.openvz.org/RPM-GPG-Key-OpenVZ

Kernel installation

Limited OpenVZ functionality is supported when you run a recent 3.x kernel (see vzctl for upstream kernel), so installing the OpenVZ kernel is optional but still recommended.

# yum install vzkernel

System configuration

Please make sure the following steps are performed before rebooting into OpenVZ kernel.

sysctl

There are a number of kernel parameters that should be set for OpenVZ to work correctly. These parameters are stored in the /etc/sysctl.conf file. Here are the relevant portions of the file; please edit accordingly.

# On Hardware Node we generally need
# packet forwarding enabled and proxy arp disabled
net.ipv4.ip_forward = 1
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding = 1
net.ipv4.conf.default.proxy_arp = 0

# Enables source route verification
net.ipv4.conf.all.rp_filter = 1

# Enables the magic-sysrq key
kernel.sysrq = 1

# We do not want all our interfaces to send redirects
net.ipv4.conf.default.send_redirects = 1
net.ipv4.conf.all.send_redirects = 0
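
After saving the file, the usual way to load the new values is sysctl -p (as root). A quick way to confirm a setting actually took effect is to read it back from /proc; this small check is my own sketch, assuming a Linux host:

```shell
# Apply the settings (requires root):
#   sysctl -p
# Then read a value back directly from /proc to confirm it took effect:
check_sysctl() {   # usage: check_sysctl net/ipv4/ip_forward
    cat "/proc/sys/$1" 2>/dev/null || echo "unavailable"
}
check_sysctl net/ipv4/ip_forward   # expect 1 after the change above
```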

SELinux

SELinux should be disabled. Put SELINUX=disabled in /etc/sysconfig/selinux:

echo "SELINUX=disabled" > /etc/sysconfig/selinux

Tools installation

Before installing tools, please read about vzstats and opt-out if you don’t want to help the project.

OpenVZ needs some user-level tools installed:

# yum install vzctl vzquota ploop

Reboot into OpenVZ

Now reboot the machine and choose “OpenVZ” in the boot loader menu (it should be the default choice).

Download OS templates

An OS template is a Linux distribution installed into a container and then packed into a gzipped tarball. Using such a cache, a new container can be created in a minute.

Download a precreated template directly from download.openvz.org/template/precreated, or from one of the mirrors. Put the tarballs as-is (no unpacking needed) into the /vz/template/cache/ directory.
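
For example, a template can be fetched with wget. The template name below is illustrative only, not from the article; check the download server for current names:

```shell
# Build the download URL for a precreated template tarball.
template_url() {   # usage: template_url <template-name>
    echo "http://download.openvz.org/template/precreated/$1.tar.gz"
}

# Put the tarball as-is into the cache directory, e.g.:
#   wget -P /vz/template/cache/ "$(template_url centos-6-x86_64)"
template_url centos-6-x86_64
```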

Rackspace joins the OpenPower Foundation

Gigaom

Rackspace is now an official member of the OpenPower Foundation, the IBM-created organization whose job is to help oversee IBM’s open-source chips; these chips are poised to give Intel’s x86 chips a run for their money. The cloud provider said in a blog post Tuesday that it will be working with partners to “design and build an OpenPOWER-based, Open Compute platform” that it eventually aims to put into production. Rackspace now joins Google, Canonical, Nvidia and Samsung as another OpenPower member. In early October, IBM announced a new OpenPower-certified server for webscale-centric companies that comes with an IBM Power8 processor and Nvidia’s GPU accelerator.

View original post

The Pirate Bay shutdown: the whole story (so far)

For the past decade, if you wanted to download copyrighted material and didn’t want to pay for it, it’s likely you turned to The Pirate Bay. Up until a police raid took it offline last week, it was the most popular place to grab Sunday’s episode of The Newsroom or Gone Girl months before the Blu-ray hits stores. You didn’t have to log in to some arcane message board or know someone to get an invite — the anonymous file-sharing site was open to everybody and made piracy as simple as a Google search. That’s what scared Hollywood.

The movie industry claimed that in 2006 alone, piracy cost it some $6.1 billion. Naturally, it went after the biggest target to exact its revenge: the Sweden-based Pirate Bay. Given Sweden’s lax laws regarding copyrighted materials, Hollywood had to enlist the United States government for help cracking down on the site. The US threatened that unless something was done to take the site offline, it would impose trade sanctions against Sweden by way of the World Trade Organization. That led to Swedish police raiding the outfit in 2006, confiscating enough servers and computer equipment to fill three trucks and making two arrests. Three days later, the site was back up and running and more popular than ever before thanks to a swell of mainstream media coverage.

Read rest of the story here

How do I find out Linux Resource utilization to detect system bottlenecks?

Q. How can I find out Linux resource utilization using the vmstat command? How do I get information about high disk I/O and memory usage?

A. The vmstat command reports information about processes, memory, paging, block IO, traps, and CPU activity. A real advantage of vmstat is that its output is to the point: concise and easy to read and understand. The output of vmstat can be used to help identify system bottlenecks. Please note that Linux vmstat does not count itself as a running process.

Here is the output of the vmstat command from my enterprise-grade system:
$ vmstat -S M

Output:

procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
3  0      0   1963    607   2359    0    0     0     0    0     1 32  0 68  0

Where,

  • The first line is nothing but the six category headings. The second line gives more information about each category. This second line gives all the data you need.
  • -S M: vmstat lets you choose units (k, K, m, M); the default is K (1024 bytes). I am using M since this system has over 4 GB of memory. Without the -S M option it will use K as the unit.

$ vmstat

Output:

procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
3  0      0 2485120 621952 2415368  0    0     0     0    0     1 32  0 68  0

Field Description For Vm Mode

(a) procs: the process-related fields are:

  • r: The number of processes waiting for run time.
  • b: The number of processes in uninterruptible sleep.

(b) memory: the memory-related fields are:

  • swpd: the amount of virtual memory used.
  • free: the amount of idle memory.
  • buff: the amount of memory used as buffers.
  • cache: the amount of memory used as cache.

(c) swap: the swap-related fields are:

  • si: Amount of memory swapped in from disk (/s).
  • so: Amount of memory swapped to disk (/s).

(d) io: the I/O-related fields are:

  • bi: Blocks received from a block device (blocks/s).
  • bo: Blocks sent to a block device (blocks/s).

(e) system: the system-related fields are:

  • in: The number of interrupts per second, including the clock.
  • cs: The number of context switches per second.

(f) cpu: the CPU-related fields are:

These are percentages of total CPU time.

  • us: Time spent running non-kernel code. (user time, including nice time)
  • sy: Time spent running kernel code. (system time)
  • id: Time spent idle. Prior to Linux 2.5.41, this includes IO-wait time.
  • wa: Time spent waiting for IO. Prior to Linux 2.5.41, shown as zero.

As you can see, the first report gives averaged data since the last reboot. Additional reports give information over a sampling period of length delay. You need to sample data using delays, i.e. collect data at set intervals. For example, collect data every 2 seconds (or collect data every 2 seconds, 5 times only):
$ vmstat -S M 2
OR
$ vmstat -S M 2 5

Output:

procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
3  0      0   1756    607   2359    0    0     0     0    0     1 32  0 68  0
3  0      0   1756    607   2359    0    0     0     0 1018    65 38  0 62  0
3  0      0   1756    607   2359    0    0     0     0 1011    64 37  0 63  0
3  0      0   1756    607   2359    0    0     0    20 1018    72 37  0 63  0
3  0      0   1756    607   2359    0    0     0     0 1012    64 37  0 62  0
3  0      0   1756    607   2359    0    0     0     0 1011    65 38  0 63  0
3  0      0   1995    607   2359    0    0     0     0 1012    62 35  2 63  0
3  0      0   1731    607   2359    0    0     0     0 1012    64 34  3 62  0
3  0      0   1731    607   2359    0    0     0     0 1013    72 38  0 62  0
3  0      0   1731    607   2359    0    0     0     0 1013    63 37  0 63  0
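
Once you know the column positions, this kind of sampling is easy to script. The helper below is my own sketch, not from the article: it flags any sample whose I/O-wait column (wa, the 16th field in the output above) exceeds a threshold, which is a common sign of a disk bottleneck.

```shell
# Flag vmstat samples with high I/O-wait.
high_wait() {   # usage: vmstat 2 5 | high_wait <threshold-percent>
    # Skip header lines by requiring a numeric first field (the 'r' column).
    awk -v t="$1" '$1 ~ /^[0-9]+$/ && $16 + 0 > t { print "high iowait:", $16 "%" }'
}

# Try it on a sample line shaped like the output above (wa = 9):
printf '3  0 0 1756 607 2359 0 0 0 0 1018 65 38 0 53 9\n' | high_wait 5
```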

This is what most system administrators do to identify system bottlenecks. I hope you find vmstat data concise and easy to read.

30 Handy Bash Shell Aliases

An alias is nothing but a shortcut to commands. The alias command allows a user to launch any command or group of commands (including options and filenames) by entering a single word. Use the alias command to display a list of all defined aliases. You can add user-defined aliases to the ~/.bashrc file. You can cut down typing time with these aliases, work smartly, and increase productivity at the command prompt.

More about aliases

The general syntax for the alias command in the bash shell is shown in the tasks below.

Task: List aliases

Type the following command:

alias

Sample outputs:

alias ..='cd ..'
alias amazonbackup='s3backup'
alias apt-get='sudo apt-get'

By default, the alias command shows a list of aliases that are defined for the current user.

Task: Define / create an alias (bash syntax)

To create the alias use the following syntax:

alias name=value
alias name='command'
alias name='command arg1 arg2'
alias name='/path/to/script'
alias name='/path/to/script.pl arg1'

In this example, create the alias c for the commonly used clear command, which clears the screen, by typing the following command and then pressing the ENTER key:

alias c='clear'

Then, to clear the screen, instead of typing clear, you would only have to type the letter ‘c’ and press the [ENTER] key:

c

Task: Disable an alias temporarily (bash syntax)

An alias can be disabled temporarily using the following syntax:

## path/to/full/command
/usr/bin/clear
## call alias with a backslash ##
\c

Task: Remove an alias (bash syntax)

You need to use the command called unalias to remove aliases. Its syntax is as follows:

unalias aliasname

In this example, remove the alias c which was created in an earlier example:

unalias c

You also need to delete the alias from the ~/.bashrc file using a text editor (see next section).

Task: Make aliases permanent (bash syntax)

The alias c remains in effect only during the current login session. Once you log out or reboot the system, the alias c will be gone. To avoid this problem, add the alias to your ~/.bashrc file; enter:

vi ~/.bashrc

The alias c for the current user can be made permanent by entering the following line:

alias c='clear'

Save and close the file. System-wide aliases (i.e. aliases for all users) can be put in the /etc/bashrc file. Please note that the alias command is built into various shells including ksh, tcsh/csh, ash, bash and others.
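
The edit above can also be scripted. This is a small sketch of my own that appends an alias definition only if it is not already present; it is demonstrated against a temporary file standing in for ~/.bashrc, so it does not touch your real startup file:

```shell
# Append an alias definition to a startup file unless it is already there.
add_alias() {   # usage: add_alias "alias c='clear'" ~/.bashrc
    # -q quiet, -x whole-line match, -F fixed string (no regex surprises)
    grep -qxF "$1" "$2" || echo "$1" >> "$2"
}

rc=$(mktemp)                       # demo file standing in for ~/.bashrc
add_alias "alias c='clear'" "$rc"
add_alias "alias c='clear'" "$rc"  # second call is a no-op
cat "$rc"                          # the alias line appears exactly once
rm -f "$rc"
```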

A note about privileged access

You can add code as follows in ~/.bashrc:

# if user is not root, pass all commands via sudo #
if [ $UID -ne 0 ]; then
    alias reboot='sudo reboot'
    alias update='sudo apt-get upgrade'
fi

A note about OS specific aliases

You can add code as follows in ~/.bashrc using the case statement:

### Get os name via uname ###
_myos="$(uname)"
 
### add alias as per os using $_myos ###
case $_myos in
   Linux) alias foo='/path/to/linux/bin/foo';;
   FreeBSD|OpenBSD) alias foo='/path/to/bsd/bin/foo' ;;
   SunOS) alias foo='/path/to/sunos/bin/foo' ;;
   *) ;;
esac

30 uses for aliases

You can define various types of aliases as follows to save time and increase productivity.

#1: Control ls command output

The ls command lists directory contents and you can colorize the output:

## Colorize the ls output ##
alias ls='ls --color=auto'
 
## Use a long listing format ##
alias ll='ls -la'
 
## Show hidden files ##
alias l.='ls -d .* --color=auto'

#2: Control cd command behavior

## get rid of command not found ##
alias cd..='cd ..'
 
## a quick way to get out of current directory ##
alias ..='cd ..'
alias ...='cd ../../../'
alias ....='cd ../../../../'
alias .....='cd ../../../../..'
alias .4='cd ../../../../'
alias .5='cd ../../../../..'

#3: Control grep command output

The grep command is a command-line utility for searching plain-text files for lines matching a regular expression:

## Colorize the grep command output for ease of use (good for log files)##
alias grep='grep --color=auto'
alias egrep='egrep --color=auto'
alias fgrep='fgrep --color=auto'

#4: Start calculator with math support

alias bc='bc -l'

#4: Generate sha1 digest

alias sha1='openssl sha1'

#5: Create parent directories on demand

The mkdir command is used to create directories:

alias mkdir='mkdir -pv'

#6: Colorize diff output

You can compare files line by line using diff and use a tool called colordiff to colorize diff output:

# install  colordiff package 
alias diff='colordiff'

#7: Make mount command output pretty and human-readable

alias mount='mount |column -t'

#8: Command short cuts to save time

# handy short cuts #
alias h='history'
alias j='jobs -l'

#9: Create a new set of commands

alias path='echo -e ${PATH//:/\\n}'
alias now='date +"%T"'
alias nowtime=now
alias nowdate='date +"%d-%m-%Y"'

#10: Set vim as default

alias vi=vim
alias svi='sudo vi'
alias vis='vim "+set si"'
alias edit='vim'

#11: Control output of networking tool called ping

# Stop after sending count ECHO_REQUEST packets #
alias ping='ping -c 5'
# Do not wait 1 second between packets, go fast #
alias fastping='ping -c 100 -i 0.2'

#12: Show open ports

Use the netstat command to quickly list all TCP/UDP ports on the server:

alias ports='netstat -tulanp'

#13: Wakeup sleeping servers

Wake-on-LAN (WOL) is an Ethernet networking standard that allows a server to be turned on by a network message. You can quickly wake up NAS devices and servers using the following aliases:

## replace mac with your actual server mac address #
alias wakeupnas01='/usr/bin/wakeonlan 00:11:32:11:15:FC'
alias wakeupnas02='/usr/bin/wakeonlan 00:11:32:11:15:FD'
alias wakeupnas03='/usr/bin/wakeonlan 00:11:32:11:15:FE'

#14: Control firewall (iptables) output

Netfilter is a host-based firewall for Linux operating systems. It is included as part of the Linux distribution and it is activated by default. These aliases cover the most common iptables tasks required by a new Linux user to secure his or her Linux operating system from intruders.

## shortcut  for iptables and pass it via sudo#
alias ipt='sudo /sbin/iptables'
 
# display all rules #
alias iptlist='sudo /sbin/iptables -L -n -v --line-numbers'
alias iptlistin='sudo /sbin/iptables -L INPUT -n -v --line-numbers'
alias iptlistout='sudo /sbin/iptables -L OUTPUT -n -v --line-numbers'
alias iptlistfw='sudo /sbin/iptables -L FORWARD -n -v --line-numbers'
alias firewall=iptlist

#15: Debug web server / cdn problems with curl

# get web server headers #
alias header='curl -I'
 
# find out if remote server supports gzip / mod_deflate or not #
alias headerc='curl -I --compress'

#16: Add safety nets

# do not delete / or prompt if deleting more than 3 files at a time #
alias rm='rm -I --preserve-root'
 
# confirmation #
alias mv='mv -i'
alias cp='cp -i'
alias ln='ln -i'
 
# Prevent changing perms on / #
alias chown='chown --preserve-root'
alias chmod='chmod --preserve-root'
alias chgrp='chgrp --preserve-root'

#17: Update Debian Linux server

The apt-get command is used for installing packages over the internet (ftp or http). You can also upgrade all packages in a single operation:

# distro specific  - Debian / Ubuntu and friends #
# install with apt-get
alias apt-get="sudo apt-get"
alias updatey="sudo apt-get --yes"
 
# update on one command 
alias update='sudo apt-get update && sudo apt-get upgrade'

#18: Update RHEL / CentOS / Fedora Linux server

The yum command is a package management tool for RHEL / CentOS / Fedora Linux and friends:

## distro specific RHEL/CentOS ##
alias update='yum update'
alias updatey='yum -y update'

#19: Tune sudo and su

# become root #
alias root='sudo -i'
alias su='sudo -i'

#20: Pass halt/reboot via sudo

The shutdown command brings the Linux / Unix system down:

# reboot / halt / poweroff
alias reboot='sudo /sbin/reboot'
alias poweroff='sudo /sbin/poweroff'
alias halt='sudo /sbin/halt'
alias shutdown='sudo /sbin/shutdown'

#21: Control web servers

# also pass it via sudo so whoever is admin can reload it without calling you #
alias nginxreload='sudo /usr/local/nginx/sbin/nginx -s reload'
alias nginxtest='sudo /usr/local/nginx/sbin/nginx -t'
alias lightyload='sudo /etc/init.d/lighttpd reload'
alias lightytest='sudo /usr/sbin/lighttpd -f /etc/lighttpd/lighttpd.conf -t'
alias httpdreload='sudo /usr/sbin/apachectl -k graceful'
alias httpdtest='sudo /usr/sbin/apachectl -t && /usr/sbin/apachectl -t -D DUMP_VHOSTS'

#22: Alias into our backup stuff

# if cron fails or if you want backup on demand just run these commands # 
# again pass it via sudo so whoever is in admin group can start the job #
# Backup scripts #
alias backup='sudo /home/scripts/admin/scripts/backup/wrapper.backup.sh --type local --target /raid1/backups'
alias nasbackup='sudo /home/scripts/admin/scripts/backup/wrapper.backup.sh --type nas --target nas01'
alias s3backup='sudo /home/scripts/admin/scripts/backup/wrapper.backup.sh --type nas --target nas01 --auth /home/scripts/admin/.authdata/amazon.keys'
alias rsnapshothourly='sudo /home/scripts/admin/scripts/backup/wrapper.rsnapshot.sh --type remote --target nas03 --auth /home/scripts/admin/.authdata/ssh.keys --config /home/scripts/admin/scripts/backup/config/adsl.conf'
alias rsnapshotdaily='sudo  /home/scripts/admin/scripts/backup/wrapper.rsnapshot.sh --type remote --target nas03 --auth /home/scripts/admin/.authdata/ssh.keys  --config /home/scripts/admin/scripts/backup/config/adsl.conf'
alias rsnapshotweekly='sudo /home/scripts/admin/scripts/backup/wrapper.rsnapshot.sh --type remote --target nas03 --auth /home/scripts/admin/.authdata/ssh.keys  --config /home/scripts/admin/scripts/backup/config/adsl.conf'
alias rsnapshotmonthly='sudo /home/scripts/admin/scripts/backup/wrapper.rsnapshot.sh --type remote --target nas03 --auth /home/scripts/admin/.authdata/ssh.keys  --config /home/scripts/admin/scripts/backup/config/adsl.conf'
alias amazonbackup=s3backup

#23: Desktop specific – play avi/mp3 files on demand

## play video files in a current directory ##
# cd ~/Download/movie-name 
# playavi or vlc 
alias playavi='mplayer *.avi'
alias vlc='vlc *.avi'
 
# play all music files from the current directory #
alias playwave='for i in *.wav; do mplayer "$i"; done'
alias playogg='for i in *.ogg; do mplayer "$i"; done'
alias playmp3='for i in *.mp3; do mplayer "$i"; done'
 
# play files from nas devices #
alias nplaywave='for i in /nas/multimedia/wave/*.wav; do mplayer "$i"; done'
alias nplayogg='for i in /nas/multimedia/ogg/*.ogg; do mplayer "$i"; done'
alias nplaymp3='for i in /nas/multimedia/mp3/*.mp3; do mplayer "$i"; done'
 
# shuffle mp3/ogg etc by default #
alias music='mplayer --shuffle *'

#24: Set default interfaces for sys admin related commands

vnstat is a console-based network traffic monitor. dnstop is a console tool to analyze DNS traffic. The tcptrack and iftop commands display information about TCP/UDP connections on a network interface and bandwidth usage on an interface by host, respectively.

## On all of our servers, eth1 is connected to the Internet via vlan / router etc ##
alias dnstop='dnstop -l 5  eth1'
alias vnstat='vnstat -i eth1'
alias iftop='iftop -i eth1'
alias tcpdump='tcpdump -i eth1'
alias ethtool='ethtool eth1'
 
# work on wlan0 by default #
# Only useful for laptop as all servers are without wireless interface
alias iwconfig='iwconfig wlan0'

#25: Get system memory, cpu usage, and gpu memory info quickly

## pass options to free ## 
alias meminfo='free -m -l -t'
 
## get top process eating memory
alias psmem='ps auxf | sort -nr -k 4'
alias psmem10='ps auxf | sort -nr -k 4 | head -10'
 
## get top process eating cpu ##
alias pscpu='ps auxf | sort -nr -k 3'
alias pscpu10='ps auxf | sort -nr -k 3 | head -10'
 
## Get server cpu info ##
alias cpuinfo='lscpu'
 
## older system use /proc/cpuinfo ##
##alias cpuinfo='less /proc/cpuinfo' ##
 
## get GPU ram on desktop / laptop## 
alias gpumeminfo='grep -i --color memory /var/log/Xorg.0.log'

#26: Control Home Router

The curl command can be used to reboot Linksys routers.

# Reboot my home Linksys WAG160N / WAG54 / WAG320 / WAG120N Router / Gateway from *nix.
alias rebootlinksys="curl -u 'admin:my-super-password' 'http://192.168.1.2/setup.cgi?todo=reboot'"
 
# Reboot tomato based Asus NT16 wireless bridge 
alias reboottomato="ssh admin@192.168.1.1 /sbin/reboot"

#27: Resume wget by default

The GNU Wget is a free utility for non-interactive download of files from the Web. It supports HTTP, HTTPS, and FTP protocols, and it can resume downloads too:

## this one has saved my butt so many times ##
alias wget='wget -c'

#28: Use a different browser for testing websites

alias ff4='/opt/firefox4/firefox'
alias ff13='/opt/firefox13/firefox'
alias chrome='/opt/google/chrome/chrome'
alias opera='/opt/opera/opera'
 
#default ff 
alias ff=ff13
 
#my default browser 
alias browser=chrome

#29: A note about ssh aliases

Do not create an ssh alias; instead use the ~/.ssh/config OpenSSH client configuration file. It offers more options. An example:

Host server10
  Hostname 1.2.3.4
  IdentityFile ~/backups/.ssh/id_dsa
  User foobar
  Port 30000
  ForwardX11Trusted yes
  TCPKeepAlive yes

You can now connect to server10 using the following syntax:
$ ssh server10

#30: It’s your turn to share…

## set some other defaults ##
alias df='df -H'
alias du='du -ch'
 
# top is atop, just like vi is vim
alias top='atop'
 
## nfsrestart  - must be root  ##
## refresh nfs mount / cache etc for Apache ##
alias nfsrestart='sync && sleep 2 && /etc/init.d/httpd stop && umount netapp2:/exports/http && sleep 2 && mount -o rw,sync,rsize=32768,wsize=32768,intr,hard,proto=tcp,fsc netapp2:/exports/http /var/www/html && /etc/init.d/httpd start'
 
## Memcached server status  ##
alias mcdstats='/usr/bin/memcached-tool 10.10.27.11:11211 stats'
alias mcdshow='/usr/bin/memcached-tool 10.10.27.11:11211 display'
 
## quickly flush out memcached server ##
alias flushmcd='echo "flush_all" | nc 10.10.27.11 11211'
 
## Remove assets quickly from Akamai / Amazon cdn ##
alias cdndel='/home/scripts/admin/cdn/purge_cdn_cache --profile akamai'
alias amzcdndel='/home/scripts/admin/cdn/purge_cdn_cache --profile amazon'
 
## supply list of urls via file or stdin
alias cdnmdel='/home/scripts/admin/cdn/purge_cdn_cache --profile akamai --stdin'
alias amzcdnmdel='/home/scripts/admin/cdn/purge_cdn_cache --profile amazon --stdin'

Conclusion

This post summarizes several types of uses for *nix bash aliases:

  1. Setting default options for a command (e.g. set eth0 as default option – alias ethtool='ethtool eth0' ).
  2. Correcting typos (cd.. will act as cd .. via alias cd..='cd ..').
  3. Reducing the amount of typing.
  4. Setting the default path of a command that exists in several versions on a system (e.g. GNU grep is located at /usr/local/bin/grep and Unix grep is located at /bin/grep. To use GNU grep use alias grep='/usr/local/bin/grep' ).
  5. Adding the safety nets to Unix by making commands interactive by setting default options. (e.g. rm, mv, and other commands).
  6. Compatibility by creating commands for older operating systems such as MS-DOS or other Unix like operating systems (e.g. alias del=rm ).

How Organizations Are Authenticated for SSL Certificates

Certification Authorities (CAs) are trusted third parties that authenticate customers before issuing SSL certificates to secure their servers.

Exactly how do CAs authenticate these organizations? And where are the rules that determine what CAs must do during authentication?

The Rules on Customer Authentication

In the past, there were no common rules applicable to CAs as to minimum steps required to authenticate a customer before issuing an SSL certificate. Instead, each CA was permitted to create its own authentication processes, and was only required to describe the process in general terms in its public Certification Practice Statement (CPS). In many cases, the CPS authentication description was vague and hard to understand, and some CAs were less diligent than others during authentication.

To raise the bar for customer authentication, CAs first developed new, common, more stringent authentication standards for their new Extended Validation (EV) certificates, which earn a favorable “green bar” user interface in most browsers and applications. These minimum authentication standards were detailed in the CA/Browser Forum’s 2008 Guidelines for The Issuance and Management of Extended Validation Certificates. In 2012, additional authentication requirements were added for all SSL certificates in the CA/Browser Forum’s Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates. See documents at https://www.cabforum.org/documents.html.

All public CAs must follow these same authentication rules when issuing SSL certificates.

Requirements Common to All Certificates

First of all, there are authentication and technical requirements applicable to all certificates. A CA must check the customer’s certificate signing request to make sure it meets minimum cryptographic algorithm and key size requirements, and is not based on known weak Debian keys.

A CA must then check the customer and certificate data against a list of high risk applicants (organizations or domains that are most commonly targeted in phishing and other fraudulent schemes), and perform extra authentication steps as appropriate. The CA must also check the customer against internal databases maintained by the CA that include previously revoked certificates and certificate requests previously rejected due to suspected phishing or other fraudulent usage. Finally, the CA must confirm that the customer and its location are not identified on any government denied list of prohibited persons, organizations, or countries.

These basic CA checks are one major advantage of CA-issued certificates over self-signed certificates (including DANE certificates), which are not subject to any similar public safeguards.

Simplest Authentication – Domain Validation (DV) Certificates

After these basic checks, the simplest level of authentication occurs for Domain Validation or DV certificates. These certificates do not confirm the identity of the certificate holder, but they do confirm that the certificate holder owns or controls the domains included inside the DV certificate.

DV validation is usually performed using an automated method in which the CA sends the customer an email message containing a confirming URL link, using a limited list of email addresses. The only email addresses that CAs are allowed to use for domain confirmation are:

  1. Any contact email addresses for the domain shown in the WhoIs record, or
  2. An email address formed from one of five permitted prefixes (admin@, administrator@, hostmaster@, postmaster@, and webmaster@) attached to the domain being confirmed.

The idea is that only a customer that owns or controls a domain can receive and respond to email messages sent to these email addresses. Domain control can also be established by a manual lookup in WhoIs, by requiring the customer to make an agreed-upon change to a web page secured by the domain, or by obtaining confirmation from the domain name registrar.
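
The permitted-prefix rule is mechanical enough to express in a couple of lines of shell; a sketch that prints the five admin-style addresses a CA may use for a given domain (the domain is a placeholder, and WHOIS contact addresses are also permitted):

```shell
# Print the five admin-style validation addresses for a domain.
domain="example.com"
for prefix in admin administrator hostmaster postmaster webmaster; do
    echo "${prefix}@${domain}"
done
```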

Some CAs take additional steps during DV validation, such as making an automated telephone call to the customer’s phone number and checking the customer’s credit card number, in order to establish potential points of contact for future reference. The CA is also permitted to insert a country code in the certificate along with all verified domains if the CA has confirmed that the customer’s IP address is associated with the country, but no identifying information about the customer (such as organization name) is included in DV certificates.

The tests listed above – Requirements Common to All Certificates and requiring proof of domain ownership or control in the DNS – are one major advantage of CA-issued certificates over self-signed certificates (including DANE certificates), which are not subject to any similar public safeguards.

Next Level of Authentication – Organization Validation (OV) Certificates

The next level of customer authentication is for Organization Validation (OV) certificates. OV certificate validation involves all the same steps as for DV certificates, plus the CA takes additional steps to confirm the identity and location of the customer and includes that information inside the OV certificate before issuance.

During OV validation, the CA first looks for data confirming the customer’s identity, address, and phone number in any of the following information sources: a third party business data base (such as Dun & Bradstreet), a government business registry (such as the Corporation Commissioner’s website), a letter from a licensed attorney or accountant vouching for the customer’s identity (the CA must follow up to confirm the attorney or accountant is duly licensed and actually signed the letter), or a site visit to the customer by the CA or its agent. A CA can also use a copy of a customer document, such as a utility bill, credit card bill, bank statement, etc. to confirm the customer’s address, but not the customer’s identity.

The CA must then confirm that the contact person who is requesting the OV certificate is really connected with the customer organization (e.g., an employee or agent) that will be listed inside the certificate. To do this, the CA typically places a telephone call to the contact person using a telephone number for the organization found in a public data base (not using a phone number offered by the contact person, which might simply be the person’s mobile phone number). If the contact person representing the organization can be reached through the organization’s main phone number, the link is confirmed and the CA can presume the contact person has authority to order a certificate for the organization.

Other alternatives for confirming the link between the customer contact person and the organization include mailing a letter or sending a courier package to the person at the organization’s official address with a password that is then entered by the contact person on the CA’s order page, or placing a call to a senior official with the organization to confirm the authority of the contact person to order a certificate.

Once OV authentication has been completed, the CA will include an organization name, city, state or province, and country inside the OV certificate, along with one or more domains the customer owns or controls.

Highest Level of Authentication – Extended Validation (EV) Certificates

EV certificates represent the highest level of authentication, and are typically rewarded with a favorable user interface by the browsers and applications.

For EV authentication, the CA must confirm that the customer’s organization is properly registered as “active” with the appropriate government registry and can also be found in a third party business data base. For companies in existence for less than three years which cannot be found in a business data base, the CA must take additional steps, such as requiring an attorney or accountant letter or confirming that the customer maintains a bank account.

In addition, the CA must contact the person who signs the Subscriber Agreement (i.e., in most cases, the person who clicks “I Agree” on the CA’s website) to verify the signature, and must independently confirm the name, title, and agency of that person within the organization, typically by finding the person listed as a company officer in a public business data base, by calling the organization’s HR department through a publicly listed telephone number for the organization, or by receiving a confirming attorney or accountant letter that is independently verified.

The CA must further check the name and domain submitted by the customer to make sure it does not contain a mixed data set (for example, to make sure the Cyrillic letter “a” has not been inserted in place of the Western letter “a”, which can be used for fraudulent purposes). Finally, the EV vetter must compare all authentication data to confirm consistency (e.g., make sure all customer data contains the same address, etc.), and conduct final cross-correlation and due diligence to look for anomalies. The entire vetting file must then be independently reviewed and approved by a second vetter before an EV certificate is issued.

Three percent of all OV and EV vetting files must be reviewed each year by another authorized vetter for quality control purposes.

When an EV certificate is issued, it contains not only the standard OV information fields (organization name, city, state or province, and country), but also the entity type (corporation, government agency, etc.), location of incorporation (state, province, or country), and government registration number so that the customer is uniquely identified to the public. Most browsers display the EV certificate organization name and country of incorporation in the user interface “green bar” visible to the public at the customer’s encrypted https:// web page secured by the EV certificate.

Conclusion: CA Authentication Is a Valuable Safeguard for the Public

Because CA certificates are issued from roots trusted by all the major browsers (which provide trust indicators to the public at the customer’s secure pages), it is important that a third party first verify the technical strength of the certificate and the customer’s ownership and control of the domains included in the certificate (for DV certificates), as well as the identity of the customer (for OV and EV certificates). This helps prevent web site impersonation and fraud.

In addition, all CA-issued certificates protect the public in another way — by triggering a browser warning if the domain contained in a certificate does not match the domain visited by the public (so a warning is displayed if someone visits https://www.angel.com and the certificate securing the site is issued to a different domain, http://www.evil.com). This helps ensure that a user won’t be fooled as to the true domain of the secure web page being visited.

What Are the Different Types of SSL Certificates?

Domain Validation (DV)

A Domain Validated SSL certificate is issued after proof is established that the owner has the right to use their domain. This is typically done by the CA sending an email to the domain owner (as listed in a WHOIS database). Once the owner responds, the certificate is issued. Many CAs perform additional fraud checks to minimize issuance of a certificate for a domain that may be similar to a high value domain (e.g. Micros0ft.com, g00gle.com, b0fay.com). The certificate only contains the domain name. Because of the minimal checks performed, this certificate is typically issued more quickly than other types of certificates. While the browser displays a padlock, examination of the certificate will not show the company name, as this was not validated.

Organizational Validation (OV)

For OV certificates, CAs must validate the company name, domain name, and other information through the use of public databases. CAs may also use additional methods to ensure the information inserted into the certificate is accurate. The issued certificate will contain the company name and the domain name for which the certificate was issued. Because of these additional checks, this is the minimum certificate recommended for ecommerce transactions, as it provides the consumer with additional information about the business.

Extended Validation (EV)

EV Certificates are only issued once an entity passes a strict authentication procedure. These checks are much more stringent than for OV certificates. The objectives are twofold. First, identify the legal entity that controls a website: provide reasonable assurance to the user of an Internet browser that the website the user is accessing is controlled by a specific legal entity identified in the EV Certificate by name, address of place of business, jurisdiction of incorporation or registration, and registration number or other disambiguating information. Second, enable encrypted communications with a website: facilitate the exchange of encryption keys in order to enable the encrypted communication of information over the Internet between the user of an Internet browser and a website (the same as OV and DV).

The secondary purposes of an EV Certificate are to help establish the legitimacy of a business claiming to operate a website or distribute executable code, and to provide a vehicle that can be used to assist in addressing problems related to phishing, malware, and other forms of online identity fraud. By providing more reliable third-party verified identity and address information regarding the business, EV Certificates may help to:

  1. Make it more difficult to mount phishing and other online identity fraud attacks using Certificates.
  2. Assist companies that may be the target of phishing attacks or online identity fraud by providing them with a tool to better identify themselves to users.
  3. Assist law enforcement organizations in their investigations of phishing and other online identity fraud, including where appropriate, contacting, investigating, or taking legal action against the subject.

Because of the strict vetting procedures that CAs use to check the information about the applicant, the issuance of EV certificates usually takes longer than other types of certificates. An overview of this vetting process can be found here: https://www.cabforum.org/vetting.html. The resultant EV SSL certificate will contain the information here: https://www.cabforum.org/contents.html.

Could not chdir to home directory /root: No such file or directory Error and Solution

When I ssh into my server and log in as the root user, I get the following error on screen:

Could not chdir to home directory /root: No such file or directory

How do I fix this error under CentOS or Debian Linux server?

The error message is clear: the /root home directory does not exist or has been deleted. If you see the following error:

Could not chdir to home directory /home/demo: No such file or directory

It means that when you created a user called demo, the home directory /home/demo was not created. To fix this problem, create the missing directory and apply the correct ownership and permissions. To create a directory called /root and set permissions, type:
# mkdir /root
# chown root:root /root
# chmod 0700 /root

To create a directory called /home/demo and set permissions, type:
# mkdir /home/demo
# chown demo:demo /home/demo
# chmod 0700 /home/demo

Try to log in as demo:
# su - demo
Please note that you may need to adjust directory owner, group, and permissions as per your setup.
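
Before recreating anything, it can help to check whether the home directory recorded in the passwd database actually exists. A small sketch (demo is an example user name):

```shell
# Look up a user's home directory in the passwd database and check that
# the directory really exists on disk.
user="demo"
home=$(getent passwd "$user" | cut -d: -f6)
if [ -d "$home" ]; then
    echo "home directory $home exists"
else
    echo "home directory $home is missing; recreate it with:"
    echo "  mkdir $home && chown $user:$user $home && chmod 0700 $home"
fi
```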

Finding more information about the user account

To fetch user account entries from the administrative databases (/etc/passwd and /etc/group), enter:
$ getent passwd demo

Sample outputs:

demo:x:1000:1000:Demo User,,,:/home/demo:/bin/bash

Where,

  1. demo: Login name / username
  2. x : Password: An x character indicates that the encrypted password is stored in the /etc/shadow file.
  3. 1000: User ID (UID)
  4. 1000: The primary group ID (GID, stored in the /etc/group file)
  5. Demo User: The comment field. It allows you to add extra information about the user, such as the user’s full name, phone number, etc. This field is used by the finger command.
  6. /home/demo: Home directory
  7. /bin/bash: The absolute path of the user’s command interpreter or shell (/bin/bash)
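
Because the fields are colon-separated, individual ones can be pulled out with cut; for example (root is used here only because it exists on every system):

```shell
# Extract single fields from a passwd entry; the colon is the field
# separator, so field 1 is the login name, 6 the home directory, and
# 7 the login shell.
getent passwd root | cut -d: -f1    # login name
getent passwd root | cut -d: -f6    # home directory
getent passwd root | cut -d: -f7    # login shell
```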

$ getent group demo

Sample outputs:

demo:x:1000:

How To Install an SSL Certificate from a Commercial Certificate Authority

Introduction

This tutorial will show you how to acquire and install an SSL certificate from a trusted, commercial Certificate Authority (CA). SSL certificates allow web servers to encrypt their traffic, and also offer a mechanism to validate server identities to their visitors. The main benefit of using a purchased SSL certificate from a trusted CA, over self-signed certificates, is that your site’s visitors will not be presented with a scary warning about not being able to verify your site’s identity.

This tutorial covers how to acquire an SSL certificate from the following trusted certificate authorities:

  • GoDaddy
  • RapidSSL (via Namecheap)

You may also use any other CA of your choice.

After you have acquired your SSL certificate, we will show you how to install it on Nginx and Apache HTTP web servers.

Prerequisites

There are several prerequisites that you should ensure before attempting to obtain an SSL certificate from a commercial CA. This section will cover what you will need in order to be issued an SSL certificate from most CAs.

Money

SSL certificates that are issued from commercial CAs have to be purchased. Free alternatives include self-signed or StartSSL certificates. Self-signed certificates are not trusted by any software by default, and free StartSSL certificates are not trusted by some browsers.

Registered Domain Name

Before acquiring an SSL certificate, you must own or control the registered domain name that you wish to use the certificate with. If you do not already have a registered domain name, you may register one with one of the many domain name registrars out there (e.g. Namecheap, GoDaddy, etc.).

Domain Validation Rights

For the basic domain validation process, you must have access to one of the email addresses on your domain’s WHOIS record or to an “admin type” email address at the domain itself. Certificate authorities that issue SSL certificates will typically validate domain control by sending a validation email to one of the addresses on the domain’s WHOIS record, or to a generic admin email address at the domain itself. Some CAs provide alternative domain validation methods, such as DNS- or HTTP-based validation, which are outside the scope of this guide.

If you wish to be issued an Organization Validation (OV) or Extended Validation (EV) SSL certificate, you will also be required to provide the CA with paperwork to establish the legal identity of the website’s owner, among other things.

Web Server

In addition to the previously mentioned points, you will need a web server to install the SSL certificate on. This is the server that is reachable at the domain name that the SSL certificate will be issued for. Typically, this will be an Apache HTTP, Nginx, HAProxy, or Varnish server. If you need help setting up a web server that is accessible via your registered domain name, follow these steps:

  1. Set up a web server of your choice, for example a LEMP (Nginx) or LAMP (Apache) server. Be sure to configure the web server software to use the name of your registered domain.
  2. Configure your domain to use the appropriate nameservers.
  3. Add DNS records for your web server to your nameservers.

Choose Your Certificate Authority

If you are not sure which Certificate Authority to use, there are a few important factors to consider. At an overview level, the most important thing is that the CA you choose provides the features you want at a price that you are comfortable with. This section will focus more on the features that most SSL certificate buyers should be aware of, rather than prices.

Root Certificate Program Memberships

The most crucial point is that the CA that you choose is a member of the root certificate programs of the most commonly used operating systems and web browsers, i.e. it is a “trusted” CA, and its root certificate is trusted by common browsers and other software. If your website’s SSL certificate is signed by a “trusted” CA, its identity is considered to be valid by software that trusts the CA–this is in contrast to self-signed SSL certificates, which also provide encryption capabilities but are accompanied by identity validation warnings that are off-putting to most website visitors.

Most commercial CAs that you will encounter will be members of the common root CA programs, and will say they are compatible with 99% of browsers, but it does not hurt to check before making your certificate purchase. For example, Apple publishes the list of SSL root certificates trusted by iOS 8.

Certificate Types

Ensure that you choose a CA that offers the certificate type that you require. Many CAs offer variations of these certificate types under a variety of, often confusing, names and pricing structures. Here is a short description of each type:

  • Single Domain: Used for a single domain, e.g. example.com. Note that additional subdomains, such as www.example.com, are not included
  • Wildcard: Used for a domain and any of its subdomains. For example, a wildcard certificate for *.example.com can also be used for www.example.com and store.example.com
  • Multiple Domain: Known as a SAN or UC certificate, these can be used with multiple domains and subdomains that are added to the Subject Alternative Name field. For example, a single multi-domain certificate could be used with example.com, www.example.com, and example.net
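
To see which names an existing certificate actually covers, you can inspect its Subject Alternative Name field with openssl (example.com.crt is a placeholder for a PEM-encoded certificate file):

```shell
# List the DNS names (single, wildcard, or SAN entries) a certificate covers.
openssl x509 -in example.com.crt -noout -text \
    | grep -A1 'Subject Alternative Name' \
    | tail -1 \
    | tr ',' '\n' \
    | sed 's/^ *//'
```

Each line of output is one covered name, e.g. DNS:example.com.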

In addition to the aforementioned certificate types, there are different levels of validations that CAs offer. We will cover them here:

  • Domain Validation (DV): DV certificates are issued after the CA validates that the requestor owns or controls the domain in question
  • Organization Validation (OV): OV certificates can be issued only after the issuing CA validates the legal identity of the requestor
  • Extended Validation (EV): EV certificates can be issued only after the issuing CA validates the legal identity, among other things, of the requestor, according to a strict set of guidelines. The purpose of this type of certificate is to provide additional assurance of the legitimacy of your organization’s identity to your site’s visitors. EV certificates can be single or multiple domain, but not wildcard

This guide will show you how to obtain a single domain or wildcard SSL certificate from GoDaddy and RapidSSL, but obtaining the other types of certificates is very similar.

Additional Features

Many CAs offer a large variety of “bonus” features to differentiate themselves from the rest of the SSL certificate-issuing vendors. Some of these features can end up saving you money, so it is important that you weigh your needs against the offerings carefully before making a purchase. Examples of features to look out for include free certificate reissues, or a single-domain-priced certificate that works for both www. and the domain basename, e.g. www.example.com with a SAN of example.com.

Generate a CSR and Private Key

After you have all of your prerequisites sorted out, and you know the type of certificate you want to get, it’s time to generate a certificate signing request (CSR) and private key.

If you are planning on using Apache HTTP or Nginx as your web server, use openssl to generate your private key and CSR on your web server. In this tutorial, we will just keep all of the relevant files in our home directory but feel free to store them in any secure location on your server:

cd ~

To generate a private key, called example.com.key, and a CSR, called example.com.csr, run this command (replace the example.com with the name of your domain):

openssl req -newkey rsa:2048 -nodes -keyout example.com.key -out example.com.csr

At this point, you will be prompted for several lines of information that will be included in your certificate request. The most important part is the Common Name field which should match the name that you want to use your certificate with–for example, example.com, www.example.com, or (for a wildcard certificate request) *.example.com. If you are planning on getting an OV or EV certificate, ensure that all of the other fields accurately reflect your organization or business details.

For example:

Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:New York
Locality Name (eg, city) []:New York
Organization Name (eg, company) [Internet Widgits Pty Ltd]:My Company
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:example.com
Email Address []:sammy@example.com

This will generate a .key and .csr file. The .key file is your private key, and should be kept secure. The .csr file is what you will send to the CA to request your SSL certificate.
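
Before submitting the CSR, it is worth double-checking its contents; a typo in the Common Name means a certificate for the wrong name. openssl can print the subject and verify the request’s self-signature:

```shell
# Print the subject that will be requested, and check the CSR's
# internal signature. example.com.csr is the file generated above.
openssl req -in example.com.csr -noout -subject
openssl req -in example.com.csr -noout -verify
```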

You will need to copy and paste your CSR when submitting your certificate request to your CA. To print the contents of your CSR, use this command (replace the filename with your own):

cat example.com.csr

Now we are ready to buy a certificate from a CA. We will show two examples, GoDaddy and RapidSSL via Namecheap, but feel free to get a certificate from any other vendor.

Example CA 1: RapidSSL via Namecheap

Namecheap provides a way to buy SSL certificates from a variety of CAs. We will walk through the process of acquiring a single domain certificate from RapidSSL, but you can deviate if you want a different type of certificate.

Note: If you request a single domain certificate from RapidSSL for the www subdomain of your domain (e.g. www.example.com), they will issue the certificate with a SAN of your base domain. For example, if your certificate request is for www.example.com, the resulting certificate will work for both www.example.com and example.com.

Select and Purchase Certificate

Go to Namecheap’s SSL certificate page: https://www.namecheap.com/security/ssl-certificates.aspx.

Here you can start selecting your validation level, certificate type (“Domains Secured”), or CA (“Brand”).

For our example, we will click on the Compare Products button in the “Domain Validation” box. Then we will find “RapidSSL”, and click the Add to Cart button.

At this point, you must register or log in to Namecheap. Then finish the payment process.

Request Certificate

After paying for the certificate of your choice, go to the Manage SSL Certificates link, under the “Hi Username” section.

Here, you will see a list of all of the SSL certificates that you have purchased through Namecheap. Click on the Activate Now link for the certificate that you want to use.

Now select the software of your web server. This will determine the format of the certificate that Namecheap will deliver to you. Commonly selected options are “Apache + MOD SSL”, “nginx”, or “Tomcat”.

Paste your CSR into the box then click the Next button.

You should now be at the “Select Approver” step in the process, which will send a validation request email to an address in your domain’s WHOIS record or to an administrator type address of the domain that you are getting a certificate for. Select the address that you want to send the validation email to.

Provide the “Administrative Contact Information”. Click the Submit order button.

Validate Domain

At this point, an email will be sent to the “approver” address. Open the email and approve the certificate request.

Download Certificates

After approving the certificate, the certificate will be emailed to the Technical Contact. The certificate issued for your domain and the CA’s intermediate certificate will be at the bottom of the email.

Copy and save them to your server in the same location that you generated your private key and CSR. Name the certificate with the domain name and a .crt extension, e.g. example.com.crt, and name the intermediate certificate intermediate.crt.

The certificate is now ready to be installed on your web server.

Example CA 2: GoDaddy

GoDaddy is a popular CA, and has all of the basic certificate types. We will walk through the process of acquiring a single domain certificate, but you can deviate if you want a different type of certificate.

Select and Purchase Certificate

Go to GoDaddy’s SSL certificate page: https://www.godaddy.com/ssl/ssl-certificates.aspx.

Scroll down and click on the Get Started button.

Select the type of SSL certificate that you want from the drop down menu: single domain, multidomain (UCC), or wildcard.

Then select your plan type: domain, organization, or extended validation.

Then select the term (duration of validity).

Then click the Add to Cart button.

Review your current order, then click the Proceed to Checkout button.

Complete the registration and payment process.

Request Certificate

After you complete your order, click the SSL Certificates button (or click on My Account > Manage SSL Certificates in the top-right corner).

Find the SSL certificate that you just purchased and click the Set Up button. If you have not used GoDaddy for SSL certificates before, you will be prompted to set up the “SSL Certificates” product, and associate your recent certificate order with the product (Click the green Set Up button and wait a few minutes before refreshing your browser).

After the “SSL Certificates” Product is added to your GoDaddy account, you should see your “New Certificate” and a “Launch” button. Click on the Launch button next to your new certificate.

Provide your CSR by pasting it into the box. The SHA-2 algorithm will be used by default.

Tick the I agree checkbox, and click the Request Certificate button.

Validate Domain

Now you will have to verify that you have control of the domain, and provide GoDaddy with a few documents. GoDaddy will send a domain ownership verification email to the address that is on your domain’s WHOIS record. Follow the directions in the emails that are sent to you, and authorize the issuance of the certificate.

Download Certificate

After verifying to GoDaddy that you control the domain, check your email (the one that you registered with GoDaddy) for a message that says that your SSL certificate has been issued. Open it, and follow the download certificate link (or click the Launch button next to your SSL certificate in the GoDaddy control panel).

Now click the Download button.

Select the server software that you are using from the Server type dropdown menu–if you are using Apache HTTP or Nginx, select “Apache”–then click the Download Zip File button.

Extract the ZIP archive. It should contain two .crt files: your SSL certificate (which should have a random name) and the GoDaddy intermediate certificate bundle (gd_bundle-g2-1.crt). Copy both to your web server. Rename the certificate to the domain name with a .crt extension, e.g. example.com.crt, and rename the intermediate certificate bundle to intermediate.crt.

The certificate is now ready to be installed on your web server.

Install Certificate On Web Server

After acquiring your certificate from the CA of your choice, you must install it on your web server. This involves adding a few SSL-related lines to your web server software configuration.

We will cover basic Nginx and Apache HTTP configurations on Ubuntu 14.04 in this section.

We will assume the following things:

  • The private key, SSL certificate, and, if applicable, the CA’s intermediate certificates are located in a home directory at /home/sammy
  • The private key is called example.com.key
  • The SSL certificate is called example.com.crt
  • The CA intermediate certificate(s) are in a file called intermediate.crt

Note: In a real environment, these files should be stored somewhere that only the user that runs the web server master process (usually root) can access. The private key should be kept secure.

Nginx

If you want to use your certificate with Nginx on Ubuntu 14.04, follow this section.

With Nginx, if your CA included an intermediate certificate, you must create a single “chained” certificate file that contains your certificate and the CA’s intermediate certificates.

Change to the directory that contains your private key, certificate, and the CA intermediate certificates (in the intermediate.crt file). We will assume that they are in your home directory for the example:

cd ~

Assuming your certificate file is called example.com.crt, use this command to create a combined file called example.com.chained.crt (replace example.com with your own domain):

cat example.com.crt intermediate.crt > example.com.chained.crt
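
It is easy to concatenate the files in the wrong order or pair the certificate with the wrong key, so a quick sanity check is worthwhile. A sketch that confirms the first certificate in the chained file belongs to your private key (file names as above, assuming an RSA key):

```shell
# Compare the public-key modulus of the first certificate in the chained
# file with that of the private key; they must match exactly.
crt_mod=$(openssl x509 -in example.com.chained.crt -noout -modulus)
key_mod=$(openssl rsa  -in example.com.key -noout -modulus)
if [ "$crt_mod" = "$key_mod" ]; then
    echo "certificate matches key"
else
    echo "MISMATCH: certificate and key do not belong together" >&2
    exit 1
fi
```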

Now go to your Nginx server block configuration directory. Assuming that it is located at /etc/nginx/sites-enabled, use this command to change to it:

cd /etc/nginx/sites-enabled

Assuming you want to add SSL to your default server block file, open the file for editing:

sudo vi default

Find the listen directive, and modify it so it looks like this:

    listen 443 ssl;

Then find the server_name directive, and make sure that its value matches the common name of your certificate. Also, add the ssl_certificate and ssl_certificate_key directives to specify the paths of your certificate and private key files (replace the paths with the actual paths of your files):

    server_name example.com;
    ssl_certificate /home/sammy/example.com.chained.crt;
    ssl_certificate_key /home/sammy/example.com.key;

If you want HTTP traffic to redirect to HTTPS, you can add this additional server block at the top of the file (replace example.com with your own domain):

server {
    listen 80;
    server_name example.com;
    rewrite ^/(.*) https://example.com/$1 permanent;
}

Then save and quit.

Now restart Nginx to load the new configuration and enable HTTPS:

sudo service nginx restart

Test it out by accessing your site via HTTPS, e.g. https://example.com.
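
Beyond loading the page in a browser, you can inspect the certificate the server actually presents from the command line; a sketch with openssl (example.com is a placeholder host):

```shell
# Show the subject, issuer, and validity dates of the certificate a live
# HTTPS server presents. -servername sends SNI so that a server hosting
# several sites returns the right certificate.
echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null \
    | openssl x509 -noout -subject -issuer -dates
```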

Apache

If you want to use your certificate with Apache on Ubuntu 14.04, follow this section. This guide assumes that you want to use HTTPS only. If you follow it exactly, HTTP will not work after completing this.

Assuming your server uses the default virtual host configuration file, open it for editing:

sudo vi /etc/apache2/sites-available/000-default.conf

Find the <VirtualHost *:80> entry and modify it so your web server will listen on port 443:

<VirtualHost *:443>

Then add the ServerName directive (substitute your domain name here):

ServerName example.com

Then add the following lines to specify your certificate and key paths (substitute your actual paths here):

SSLCertificateFile /home/sammy/example.com.crt
SSLCertificateKeyFile /home/sammy/example.com.key

If you are using Apache 2.4.8 or greater, the SSLCertificateChainFile directive is deprecated; instead, concatenate your certificate and the intermediate bundle into one file (e.g. cat example.com.crt intermediate.crt > example.com.chained.crt) and point SSLCertificateFile at the combined file:

SSLCertificateFile /home/sammy/example.com.chained.crt

If you are using an older version of Apache, specify the CA intermediate bundle with this line (substitute the path):

SSLCertificateChainFile /home/sammy/intermediate.crt

Save and exit.

Enable the Apache SSL module by running this command:

sudo a2enmod ssl

Now restart Apache to load the new configuration and enable HTTPS:

sudo service apache2 restart

Test it out by accessing your site via HTTPS, e.g. https://example.com.

Conclusion

Now you should have a good idea of how to add a trusted SSL certificate to secure your web server. Be sure to shop around for a CA that you are happy with!