KernelCare Security Update Issued

An update for KernelCare has just been released to address CVE-2014-9322, and your OS should be updated automatically unless specified otherwise.

Remote backup service

A remote, online, or managed backup service, sometimes marketed as cloud backup, is a service that provides users with a system for the backup, storage, and recovery of computer files. Online backup providers are companies that provide this type of service to end users (or clients). Such backup services are considered a form of cloud computing.

Online backup systems are typically built around a client software program that runs on a schedule, typically once a day, and usually at night while computers aren’t in use. This program typically collects, compresses, encrypts, and transfers the data to the remote backup service provider’s servers or off-site hardware.
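
A minimal sketch of such a nightly client job, using standard tools. All paths, the passphrase file, and the local "vault" directory standing in for the provider's servers are placeholders for illustration; a real job would transmit step 3 to the provider with scp or rsync.

```shell
# Sketch of a nightly client-side backup job: collect, compress,
# encrypt, transfer. Paths and passphrase are placeholders.
set -eu
WORK=$(mktemp -d)
mkdir -p "$WORK/data" "$WORK/vault"
echo "important file" > "$WORK/data/notes.txt"
echo "s3cret" > "$WORK/pass"

ARCHIVE="$WORK/backup-$(date +%Y-%m-%d).tar.gz"

# 1. Collect and compress the data set.
tar -czf "$ARCHIVE" -C "$WORK" data

# 2. Encrypt client-side; the passphrase never leaves this machine.
openssl enc -aes-256-cbc -pbkdf2 -salt \
    -in "$ARCHIVE" -out "$ARCHIVE.enc" -pass "file:$WORK/pass"

# 3. "Transfer" the encrypted copy off-site (local stand-in directory;
#    in practice: scp/rsync to the service provider).
cp "$ARCHIVE.enc" "$WORK/vault/"
ls "$WORK/vault"
```

In practice such a script would run from cron or a systemd timer, scheduled for off-hours as described above.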

Characteristics

Service-based

  1. The assurance, guarantee, or validation that what was backed up is recoverable whenever it is required is critical. Data stored in the service provider’s cloud must undergo regular integrity validation to ensure its recoverability.
  2. Cloud BUR (BackUp & Restore) services need to provide a variety of granularity when it comes to RTOs (Recovery Time Objectives). One size does not fit all, either for customers or for the applications within a customer’s environment.
  3. The customer should never have to manage the back end storage repositories in order to back up and recover data.
  4. The interface used by the customer needs to enable the selection of data to protect or recover, the establishment of retention times, destruction dates as well as scheduling.
  5. Cloud backup needs to be an active process where data is collected from systems that store the original copy. This means that cloud backup does not require data to first be copied into a specific appliance before being transmitted to and stored in the service provider’s data centre.

Ubiquitous access

  1. Cloud BUR utilizes standard networking protocols (which today are primarily but not exclusively IP based) to transfer data between the customer and the service provider.
  2. Vaults or repositories need to be always available to restore data to any location connected to the Service Provider’s Cloud via private or public networks.

Scalable and elastic

  1. Cloud BUR enables flexible allocation of storage capacity to customers without limit. Storage is allocated on demand and also de-allocated as customers delete backup sets as they age.
  2. Cloud BUR enables a Service Provider to allocate storage capacity to a customer. If that customer later deletes their data or no longer needs that capacity, the Service Provider can then release and reallocate that same capacity to a different customer in an automated fashion.

Metered by use

  1. Cloud Backup allows customers to align the value of data with the cost of protecting it. It is procured on a per-gigabyte per month basis. Prices tend to vary based on the age of data, type of data (email, databases, files etc.), volume, number of backup copies and RTOs.

Shared and secure

  1. The underlying enabling technology for Cloud Backup is a full stack native cloud multitenant platform (shared everything).
  2. Data mobility/portability prevents service provider lock-in and allows customers to move their data from one Service Provider to another, or entirely back into a dedicated Private Cloud (or a Hybrid Cloud).
  3. Security in the cloud is critical. One customer can never have access to another’s data. Additionally, even Service Providers must not be able to access their customer’s data without the customer’s permission.

Enterprise-class cloud backup

An enterprise-class cloud backup solution must include an on-premise cache, to mitigate any issues due to inconsistent Internet connectivity.

Hybrid cloud backup is a backup approach combining local backup, for fast backup and restore, with off-site backup, for protection against local disasters. According to Liran Eshel, CEO of CTERA Networks, this ensures that the most recent data is available locally when recovery is needed, while archived data that is needed far less often is stored in the cloud.

Hybrid cloud backup works by storing data to local disk so that the backup can be captured at high speed, and then either the backup software or a D2D2C (Disk to Disk to Cloud) appliance encrypts and transmits data to a service provider. Recent backups are retained locally, to speed data recovery operations. There are a number of cloud storage appliances on the market that can be used as a backup target, including appliances from CTERA Networks, Nasuni, StorSimple and TwinStrata. Examples of Enterprise-class cloud backup solutions include StoreGrid, Datto, GFI Software’s MAX Backup, and IASO Backup.

Recent improvements in CPU availability allow increased use of software agents instead of hardware appliances for enterprise cloud backup.  The software-only approach can offer advantages including decreased complexity, simple scalability, significant cost savings and improved data recovery times. Examples of no-appliance cloud backup providers include Intronis and Zetta.net.

Typical features

Encryption
Data should be encrypted before it is sent across the internet, and it should be stored in its encrypted state. Encryption should be at least 256 bits, and the user should have the option of using their own encryption key, which should never be sent to the server.
Network backup
A backup service supporting network backup can back up multiple computers, servers or Network Attached Storage appliances on a local area network from a single computer or device.
Continuous backup – Continuous Data Protection
Allows the service to back up continuously or on a predefined schedule. Both methods have advantages and disadvantages. Most backup services are schedule-based and perform backups at a predetermined time. Some services provide continuous data backups which are used by large financial institutions and large online retailers. However, there is typically a trade-off with performance and system resources.
File-by-File Restore
The ability for users to restore files themselves, without the assistance of a Service Provider, by allowing the user to select files by name and/or folder. Some services allow users to select files by searching for filenames and folder names, by dates, by file type, by backup set, and by tags.
Online access to files
Some services allow you to access backed-up files via a normal web browser. Many services do not provide this type of functionality.
Data compression
Data will typically be compressed with a lossless compression algorithm to minimize the amount of bandwidth used.
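
As a quick illustration of lossless compression, the round trip below shows that the decompressed output is byte-identical to the original while the compressed copy is smaller (file names are arbitrary):

```shell
# Lossless compression round trip: decompressed output is byte-identical
# to the original, and the compressed copy is smaller on the wire.
set -eu
WORK=$(mktemp -d)
seq 1 10000 > "$WORK/data.txt"                      # compressible sample data
gzip -c9 "$WORK/data.txt" > "$WORK/data.txt.gz"     # compress, keep original
gunzip -c "$WORK/data.txt.gz" > "$WORK/roundtrip.txt"
cmp "$WORK/data.txt" "$WORK/roundtrip.txt" && echo "lossless round trip OK"
wc -c "$WORK/data.txt" "$WORK/data.txt.gz"          # compare sizes
```
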
Differential data compression
A way to further minimize network traffic is to transfer only the binary data that has changed from one day to the next, similar to the open source file transfer service Rsync.  More advanced online backup services use this method rather than transfer entire files.
Bandwidth usage
User-selectable option to use more or less bandwidth; it may be possible to set this to change at various times of day.
Off-Line Backup
Off-line backup covers daily backups, alongside and as part of the online backup solution, at times when the network connection is down. During such periods the remote backup software must back up onto a local media device such as a tape drive, a disk, or another server. The minute the network connection is restored, the remote backup software updates the remote data centre with the changes from the off-line backup media.
Synchronization
Many services support data synchronization allowing users to keep a consistent library of all their files across many computers. The technology can help productivity and increase access to data.

Common features for business users

Bulk restore
A way to restore data from a portable storage device when a full restore over the Internet might take too long.
Centralized management console
Allows for an IT department or staff member to monitor backups for the user.
File retention policies
Many businesses require a flexible file retention policy that can be applied to an unlimited number of groups of files called “sets”.
Fully managed services
Some services offer a higher level of support to businesses that might request immediate help, proactive monitoring, personal visits from their service provider, or telephone support.
Redundancy
Multiple copies of data backed up at different locations. This can be achieved by having two or more mirrored data centers, or by keeping a local copy of the latest version of backed up data on site with the business.
Regulatory compliance
Some businesses are required to comply with government regulations that govern privacy, disclosure, and legal discovery. A service provider that offers this type of service assists customers with proper compliance with and understanding of these laws.
Seed loading
Ability to send a first backup on a portable storage device rather than over the Internet when a user has large amounts of data that they need quickly backed up.
Server backup
Many businesses require backups of servers and the special databases that run on them, such as groupware, SQL, and directory services.
Versioning
Keeps multiple past versions of files to allow for rollback to or restoration from a specific point in time.

Cost factors

Online backup services are usually priced as a function of the following things:

  1. The total amount of data being backed up.
  2. The number of machines covered by the backup service.
  3. The maximum number of versions of each file that are kept.
  4. Data retention and archiving period options
  5. Managed backups vs. Unmanaged backups
  6. The level of service and features available

Some vendors limit the number of versions of a file that can be kept in the system. Some services omit this restriction and provide an unlimited number of versions. Add-on features (plug-ins), like the ability to back up currently open or locked files, are usually charged as an extra, but some services provide this built in.

Most remote backup services reduce the amount of data sent over the wire by backing up only changed files, which also reduces the customer’s total stored data. The volume of data sent and stored can be reduced further still by binary or block-level incremental backups, which transmit only the changed data blocks. Solutions that transmit only these changed blocks do not waste bandwidth retransmitting file data that has not changed.

Advantages

Remote backup has advantages over traditional backup methods:

  • Perhaps the most important aspect of backing up is that backups are stored in a different location from the original data. Traditional backup requires manually taking the backup media offsite.
  • Remote backup does not require user intervention. The user does not have to change tapes, label CDs or perform other manual steps.
  • Unlimited data retention (presuming the backup provider stays in business).
  • Backups are automatic.
  • Some remote backup services will work continuously, backing up files as they are changed.
  • Most remote backup services will maintain a list of versions of your files.
  • Most remote backup services will use 128- to 448-bit encryption to send data over unsecured links (i.e. the internet)
  • A few remote backup services can reduce backup time and bandwidth by transmitting only changed binary data bits

Disadvantages

Remote backup has some disadvantages over traditional backup methods:

  • Depending on the available network bandwidth, the restoration of data can be slow. Because data is stored offsite, the data must be recovered either via the Internet or via a disk shipped from the online backup service provider.
  • Some backup service providers have no guarantee that stored data will be kept private — for example, from employees. As such, most recommend that files be encrypted.
  • It is possible that a remote backup service provider could go out of business or be purchased, which may affect the accessibility of one’s data or the cost to continue using the service.
  • If the encryption password is lost, data recovery will be impossible. However, with managed services this should not be a problem.
  • Residential broadband services often have monthly limits that preclude large backups. They are also usually asymmetric; the user-to-network link regularly used to store backups is much slower than the network-to-user link used only when data is restored.
  • In terms of price, when looking at the raw cost of hard disks, remote backups cost about 1-20 times per GB what a local backup would.

Managed vs. unmanaged

Some services provide expert backup management services as part of the overall offering. These services typically include:

  • Assistance configuring the initial backup
  • Continuous monitoring of the backup processes on the client machines to ensure that backups actually happen
  • Proactive alerting in the event that any backups fail
  • Assistance in restoring and recovering data

Disaster recovery

Disaster recovery (DR) involves a set of policies and procedures to enable the recovery or continuation of vital technology infrastructure and systems following a natural or human-induced disaster. Disaster recovery focuses on the IT or technology systems supporting critical business functions, as opposed to business continuity, which involves keeping all essential aspects of a business functioning despite significant disruptive events. Disaster recovery is therefore a subset of business continuity.

Classification of disasters

Disasters can be classified into two broad categories. The first is natural disasters such as floods, hurricanes, tornadoes or earthquakes. While preventing a natural disaster is very difficult, risk management measures such as avoiding disaster-prone situations and good planning can help. The second category is man-made disasters, such as hazardous material spills, infrastructure failure, bio-terrorism, and disastrous IT bugs or failed change implementations. In these instances, surveillance, testing and mitigation planning are invaluable.

Importance of disaster recovery planning

Recent research supports the idea that implementing a more holistic pre-disaster planning approach is more cost-effective in the long run. Every $1 spent on hazard mitigation (such as a disaster recovery plan) saves society $4 in response and recovery costs.

As IT systems have become increasingly critical to the smooth operation of a company, and arguably the economy as a whole, the importance of ensuring the continued operation of those systems, and their rapid recovery, has increased. For example, of companies that had a major loss of business data, 43% never reopen and 29% close within two years. As a result, preparation for continuation or recovery of systems needs to be taken very seriously. This involves a significant investment of time and money with the aim of ensuring minimal losses in the event of a disruptive event.

Control measures

Control measures are steps or mechanisms that can reduce or eliminate various threats for organizations. Different types of measures can be included in a disaster recovery plan (DRP).

Disaster recovery planning is a subset of a larger process known as business continuity planning and includes planning for resumption of applications, data, hardware, electronic communications (such as networking) and other IT infrastructure. A business continuity plan (BCP) includes planning for non-IT related aspects such as key personnel, facilities, crisis communication and reputation protection, and should refer to the disaster recovery plan (DRP) for IT related infrastructure recovery / continuity.

IT disaster recovery control measures can be classified into the following three types:

  1. Preventive measures – Controls aimed at preventing an event from occurring.
  2. Detective measures – Controls aimed at detecting or discovering unwanted events.
  3. Corrective measures – Controls aimed at correcting or restoring the system after a disaster or an event.

Good disaster recovery plan measures dictate that these three types of controls be documented and exercised regularly using so-called “DR tests”.

Strategies

Prior to selecting a disaster recovery strategy, a disaster recovery planner first refers to their organization’s business continuity plan which should indicate the key metrics of recovery point objective (RPO) and recovery time objective (RTO) for various business processes (such as the process to run payroll, generate an order, etc.). The metrics specified for the business processes are then mapped to the underlying IT systems and infrastructure that support those processes.

Incomplete RTOs and RPOs can quickly derail a disaster recovery plan. Every item in the DR plan requires a defined recovery point and time objective, as failure to create them may lead to significant problems that can extend the disaster’s impact. Once the RTO and RPO metrics have been mapped to IT infrastructure, the DR planner can determine the most suitable recovery strategy for each system. The organization ultimately sets the IT budget and therefore the RTO and RPO metrics need to fit with the available budget. While most business unit heads would like zero data loss and zero time loss, the cost associated with that level of protection may make the desired high availability solutions impractical. A cost-benefit analysis often dictates which disaster recovery measures are implemented.

Some of the most common strategies for data protection include:

  • backups made to tape and sent off-site at regular intervals
  • backups made to disk on-site and automatically copied to off-site disk, or made directly to off-site disk
  • replication of data to an off-site location, which overcomes the need to restore the data (only the systems then need to be restored or synchronized), often making use of storage area network (SAN) technology
  • Hybrid Cloud solutions that replicate both on-site and to off-site data centers. These solutions provide the ability to instantly fail over to local on-site hardware, but in the event of a physical disaster, servers can be brought up in the cloud data centers as well. Examples include Quorum, rCloud from Persistent Systems, or EverSafe.
  • the use of high availability systems which keep both the data and system replicated off-site, enabling continuous access to systems and data, even after a disaster (often associated with cloud storage)

In many cases, an organization may elect to use an outsourced disaster recovery provider to provide a stand-by site and systems rather than using their own remote facilities, increasingly via cloud computing.

In addition to preparing for the need to recover systems, organizations also implement precautionary measures with the objective of preventing a disaster in the first place. These may include:

  • local mirrors of systems and/or data and use of disk protection technology such as RAID
  • surge protectors — to minimize the effect of power surges on delicate electronic equipment
  • use of an uninterruptible power supply (UPS) and/or backup generator to keep systems going in the event of a power failure
  • fire prevention/mitigation systems such as alarms and fire extinguishers
  • anti-virus software and other security measures

Could not chdir to home directory /root: No such file or directory Error and Solution

When I ssh into my server and login as the root user and I’m getting the following error on screen:

Could not chdir to home directory /root: No such file or directory

How do I fix this error under CentOS or Debian Linux server?

The error message is clear: the /root home directory does not exist or was deleted. If you see the following error:

Could not chdir to home directory /home/demo: No such file or directory

It means that when you created a user called demo, the home directory /home/demo was not created. To fix this problem, create the missing directory and set the correct ownership and permissions. To create a directory called /root and set permissions, type:
# mkdir /root
# chown root:root /root
# chmod 0700 /root

To create a directory called /home/demo and set permissions, type:
# mkdir /home/demo
# chown demo:demo /home/demo
# chmod 0700 /home/demo

Try to login as demo:
# su - demo
Please note that you may need to adjust directory owner, group, and permissions as per your setup.
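
A freshly created home directory will also be missing the default shell startup files (.bashrc, .profile, etc.) that useradd -m would normally copy from /etc/skel. A sketch of seeding them, shown against a scratch directory so it can run unprivileged; in practice you would use /home/demo as the target and run as root, then chown the result to the user:

```shell
# Seed a recreated home directory from /etc/skel, as useradd -m would.
# HOME_DIR is a scratch stand-in for /home/demo (an assumption for the
# demo); replace it with the real path and run as root in practice.
set -eu
HOME_DIR=$(mktemp -d)/demo
mkdir -p "$HOME_DIR"
cp -a /etc/skel/. "$HOME_DIR"/   # copy default dotfiles
chmod 0700 "$HOME_DIR"
ls -A "$HOME_DIR"
```
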

Finding more information about the user account

To fetch user account entries from the administrative databases (/etc/passwd and /etc/group), enter:
$ getent passwd demo

Sample outputs:

demo:x:1000:1000:Demo User,,,:/home/demo:/bin/bash

Where,

  1. demo: Login name / username
  2. x : Password: An x character indicates that the encrypted password is stored in the /etc/shadow file.
  3. 1000: User ID (UID)
  4. 1000: The primary group ID (stored in /etc/group file)
  5. Demo User: The comment field. It allows you to add extra information about the user, such as the user’s full name, phone number, etc. This field is used by the finger command.
  6. /home/demo: Home directory
  7. /bin/bash: The absolute path of a command or shell (/bin/bash)

$ getent group demo

Sample outputs:

demo:x:1000:
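
Individual fields can be extracted from these colon-separated records with cut, for example the home directory (field 6) and login shell (field 7). The record is inlined here so the example runs anywhere; in practice you would pipe `getent passwd demo` instead of echo:

```shell
# Pick single fields out of a colon-separated /etc/passwd record.
record='demo:x:1000:1000:Demo User,,,:/home/demo:/bin/bash'
echo "$record" | cut -d: -f6    # home directory -> /home/demo
echo "$record" | cut -d: -f7    # login shell    -> /bin/bash
```
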

How To Install an SSL Certificate from a Commercial Certificate Authority

Introduction

This tutorial will show you how to acquire and install an SSL certificate from a trusted, commercial Certificate Authority (CA). SSL certificates allow web servers to encrypt their traffic, and also offer a mechanism to validate server identities to their visitors. The main benefit of using a purchased SSL certificate from a trusted CA, over self-signed certificates, is that your site’s visitors will not be presented with a scary warning about not being able to verify your site’s identity.

This tutorial covers how to acquire an SSL certificate from the following trusted certificate authorities:

  • GoDaddy
  • RapidSSL (via Namecheap)

You may also use any other CA of your choice.

After you have acquired your SSL certificate, we will show you how to install it on Nginx and Apache HTTP web servers.

Prerequisites

There are several prerequisites that you should ensure before attempting to obtain an SSL certificate from a commercial CA. This section will cover what you will need in order to be issued an SSL certificate from most CAs.

Money

SSL certificates that are issued from commercial CAs have to be purchased. Free alternatives include self-signed or StartSSL certificates. Self-signed certificates are not trusted by any software and free StartSSL certificates are not trusted by some browsers.

Registered Domain Name

Before acquiring an SSL certificate, you must own or control the registered domain name that you wish to use the certificate with. If you do not already have a registered domain name, you may register one with one of the many domain name registrars out there (e.g. Namecheap, GoDaddy, etc.).

Domain Validation Rights

For the basic domain validation process, you must have access to one of the email addresses on your domain’s WHOIS record or to an “admin type” email address at the domain itself. Certificate authorities that issue SSL certificates will typically validate domain control by sending a validation email to one of the addresses on the domain’s WHOIS record, or to a generic admin email address at the domain itself. Some CAs provide alternative domain validation methods, such as DNS- or HTTP-based validation, which are outside the scope of this guide.

If you wish to be issued an Organization Validation (OV) or Extended Validation (EV) SSL certificate, you will also be required to provide the CA with paperwork to establish the legal identity of the website’s owner, among other things.

Web Server

In addition to the previously mentioned points, you will need a web server to install the SSL certificate on. This is the server that is reachable at the domain name that the SSL certificate will be issued for. Typically, this will be an Apache HTTP, Nginx, HAProxy, or Varnish server. If you need help setting up a web server that is accessible via your registered domain name, follow these steps:

  1. Set up a web server of your choice, for example a LEMP (Nginx) or LAMP (Apache) server, and be sure to configure the web server software to use the name of your registered domain.
  2. Configure your domain to use the appropriate nameservers.
  3. Add DNS records for your web server to your nameservers.

Choose Your Certificate Authority

If you are not sure of which Certificate Authority you are going to use, there are a few important factors to consider. At an overview level, the most important thing is that the CA you choose provides the features you want at a price that you are comfortable with. This section will focus more on the features that most SSL certificate buyers should be aware of, rather than prices.

Root Certificate Program Memberships

The most crucial point is that the CA that you choose is a member of the root certificate programs of the most commonly used operating systems and web browsers, i.e. it is a “trusted” CA, and its root certificate is trusted by common browsers and other software. If your website’s SSL certificate is signed by a “trusted” CA, its identity is considered to be valid by software that trusts the CA–this is in contrast to self-signed SSL certificates, which also provide encryption capabilities but are accompanied by identity validation warnings that are off-putting to most website visitors.

Most commercial CAs that you will encounter will be members of the common root CA programs, and will say they are compatible with 99% of browsers, but it does not hurt to check before making your certificate purchase. For example, Apple publishes its list of trusted SSL root certificates for iOS 8.

Certificate Types

Ensure that you choose a CA that offers the certificate type that you require. Many CAs offer variations of these certificate types under a variety of, often confusing, names and pricing structures. Here is a short description of each type:

  • Single Domain: Used for a single domain, e.g. example.com. Note that additional subdomains, such as www.example.com, are not included
  • Wildcard: Used for a domain and any of its subdomains. For example, a wildcard certificate for *.example.com can also be used for www.example.com and store.example.com
  • Multiple Domain: Known as a SAN or UC certificate, these can be used with multiple domains and subdomains that are added to the Subject Alternative Name field. For example, a single multi-domain certificate could be used with example.com, www.example.com, and example.net

In addition to the aforementioned certificate types, there are different levels of validations that CAs offer. We will cover them here:

  • Domain Validation (DV): DV certificates are issued after the CA validates that the requestor owns or controls the domain in question
  • Organization Validation (OV): OV certificates can be issued only after the issuing CA validates the legal identity of the requestor
  • Extended Validation (EV): EV certificates can be issued only after the issuing CA validates the legal identity, among other things, of the requestor, according to a strict set of guidelines. The purpose of this type of certificate is to provide additional assurance of the legitimacy of your organization’s identity to your site’s visitors. EV certificates can be single or multiple domain, but not wildcard

This guide will show you how to obtain a single domain or wildcard SSL certificate from GoDaddy and RapidSSL, but obtaining the other types of certificates is very similar.

Additional Features

Many CAs offer a large variety of “bonus” features to differentiate themselves from the rest of the SSL certificate-issuing vendors. Some of these features can end up saving you money, so it is important that you weigh your needs against the offerings carefully before making a purchase. Examples of features to look out for include free certificate reissues or a single domain-priced certificate that works for www. and the domain basename, e.g. www.example.com with a SAN of example.com.

Generate a CSR and Private Key

After you have all of your prerequisites sorted out, and you know the type of certificate you want to get, it’s time to generate a certificate signing request (CSR) and private key.

If you are planning on using Apache HTTP or Nginx as your web server, use openssl to generate your private key and CSR on your web server. In this tutorial, we will just keep all of the relevant files in our home directory but feel free to store them in any secure location on your server:

cd ~

To generate a private key, called example.com.key, and a CSR, called example.com.csr, run this command (replace the example.com with the name of your domain):

openssl req -newkey rsa:2048 -nodes -keyout example.com.key -out example.com.csr

At this point, you will be prompted for several lines of information that will be included in your certificate request. The most important part is the Common Name field which should match the name that you want to use your certificate with–for example, example.com, www.example.com, or (for a wildcard certificate request) *.example.com. If you are planning on getting an OV or EV certificate, ensure that all of the other fields accurately reflect your organization or business details.

For example:

Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:New York
Locality Name (eg, city) []:New York
Organization Name (eg, company) [Internet Widgits Pty Ltd]:My Company
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:example.com
Email Address []:sammy@example.com

This will generate a .key and .csr file. The .key file is your private key, and should be kept secure. The .csr file is what you will send to the CA to request your SSL certificate.

You will need to copy and paste your CSR when submitting your certificate request to your CA. To print the contents of your CSR, use this command (replace the filename with your own):

cat example.com.csr
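
Before pasting the CSR into the CA's order form, it is worth confirming that the Common Name and other fields came out correctly and that the request's signature is valid. The sketch below generates a throwaway key and CSR non-interactively (the -subj values mirror the example answers above and are purely illustrative) and then inspects it:

```shell
# Generate a scratch key/CSR non-interactively, then verify the CSR
# before submitting it to the CA. Subject values are illustrative.
set -eu
WORK=$(mktemp -d)
openssl req -newkey rsa:2048 -nodes \
    -keyout "$WORK/example.com.key" -out "$WORK/example.com.csr" \
    -subj "/C=US/ST=New York/L=New York/O=My Company/CN=example.com" 2>/dev/null

# Print the subject to confirm the Common Name matches your domain.
openssl req -in "$WORK/example.com.csr" -noout -subject

# -verify checks the CSR's self-signature.
openssl req -in "$WORK/example.com.csr" -noout -verify
```
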

Now we are ready to buy a certificate from a CA. We will show two examples, GoDaddy and RapidSSL via Namecheap, but feel free to get a certificate from any other vendor.

Example CA 1: RapidSSL via Namecheap

Namecheap provides a way to buy SSL certificates from a variety of CAs. We will walk through the process of acquiring a single domain certificate from RapidSSL, but you can deviate if you want a different type of certificate.

Note: If you request a single domain certificate from RapidSSL for the www subdomain of your domain (e.g. www.example.com), they will issue the certificate with a SAN of your base domain. For example, if your certificate request is for www.example.com, the resulting certificate will work for both www.example.com and example.com.

Select and Purchase Certificate

Go to Namecheap’s SSL certificate page: https://www.namecheap.com/security/ssl-certificates.aspx.

Here you can start selecting your validation level, certificate type (“Domains Secured”), or CA (“Brand”).

For our example, we will click on the Compare Products button in the “Domain Validation” box. Then we will find “RapidSSL”, and click the Add to Cart button.

At this point, you must register or log in to Namecheap. Then finish the payment process.

Request Certificate

After paying for the certificate of your choice, go to the Manage SSL Certificates link, under the “HiUsername” section.

Namecheap: SSL

Here, you will see a list of all of the SSL certificates that you have purchased through Namecheap. Click on the Activate Now link for the certificate that you want to use.

Namecheap: SSL Management

Now select your web server software. This determines the format of the certificate that Namecheap will deliver to you. Commonly selected options are “Apache + MOD SSL”, “nginx”, or “Tomcat”.

Paste your CSR into the box then click the Next button.

You should now be at the “Select Approver” step. This will send a validation request email to an address in your domain’s WHOIS record, or to an administrator-type address of the domain that you are getting a certificate for. Select the address that you want the validation email sent to.

Provide the “Administrative Contact Information”. Click the Submit order button.

Validate Domain

At this point, an email will be sent to the “approver” address. Open the email and approve the certificate request.

Download Certificates

After you approve the certificate request, the certificate will be emailed to the Technical Contact. The certificate issued for your domain and the CA’s intermediate certificate will both be at the bottom of the email.

Copy and save them to your server in the same location that you generated your private key and CSR. Name the certificate with the domain name and a .crt extension, e.g. example.com.crt, and name the intermediate certificate intermediate.crt.
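Before installing, you may also want to confirm the subject, issuer, and validity window of the certificate you saved. A sketch using a throwaway self-signed certificate as a stand-in (in practice, point -in at your real example.com.crt instead):

```shell
# Demo stand-in: a throwaway self-signed certificate so the
# inspection command below can run anywhere
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.crt \
  -days 1 -subj "/CN=example.com" 2>/dev/null

# Print the subject, issuer, and validity dates
openssl x509 -in demo.crt -noout -subject -issuer -dates
```

For a CA-issued certificate, the issuer should name the CA (e.g. RapidSSL) rather than matching the subject.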

The certificate is now ready to be installed on your web server.

Example CA 2: GoDaddy

GoDaddy is a popular CA, and has all of the basic certificate types. We will walk through the process of acquiring a single domain certificate, but you can deviate if you want a different type of certificate.

Select and Purchase Certificate

Go to GoDaddy’s SSL certificate page: https://www.godaddy.com/ssl/ssl-certificates.aspx.

Scroll down and click on the Get Started button.

Go Daddy: Get started

Select the type of SSL certificate that you want from the drop down menu: single domain, multidomain (UCC), or wildcard.

GoDaddy: Certificate Type

Then select your plan type: domain, organization, or extended validation.

Then select the term (duration of validity).

Then click the Add to Cart button.

Review your current order, then click the Proceed to Checkout button.

Complete the registration and payment process.

Request Certificate

After you complete your order, click the SSL Certificates button (or click on My Account > Manage SSL Certificates in the top-right corner).

Find the SSL certificate that you just purchased and click the Set Up button. If you have not used GoDaddy for SSL certificates before, you will be prompted to set up the “SSL Certificates” product, and associate your recent certificate order with the product (Click the green Set Up button and wait a few minutes before refreshing your browser).

After the “SSL Certificates” Product is added to your GoDaddy account, you should see your “New Certificate” and a “Launch” button. Click on the Launch button next to your new certificate.

Provide your CSR by pasting it into the box. The SHA-2 algorithm will be used by default.

Tick the I agree checkbox, and click the Request Certificate button.

Validate Domain

Now you will have to verify that you have control of the domain and, depending on the certificate type, provide GoDaddy with a few documents. GoDaddy will send a domain ownership verification email to the address that is on your domain’s WHOIS record. Follow the directions in the emails that are sent to you, and authorize the issuance of the certificate.

Download Certificate

After verifying to GoDaddy that you control the domain, check your email (the one that you registered with GoDaddy with) for a message that says that your SSL certificate has been issued. Open it, and follow the download certificate link (or click the Launch button next to your SSL certificate in the GoDaddy control panel).

Now click the Download button.

Select the server software that you are using from the Server type dropdown menu (if you are using Apache HTTP or Nginx, select “Apache”), then click the Download Zip File button.

Extract the ZIP archive. It should contain two .crt files: your SSL certificate (which will have a randomized filename) and the GoDaddy intermediate certificate bundle (gd_bundle-g2-1.crt). Copy both to your web server. Rename the certificate to the domain name with a .crt extension, e.g. example.com.crt, and rename the intermediate certificate bundle to intermediate.crt.

The certificate is now ready to be installed on your web server.

Install Certificate On Web Server

After acquiring your certificate from the CA of your choice, you must install it on your web server. This involves adding a few SSL-related lines to your web server software configuration.

We will cover basic Nginx and Apache HTTP configurations on Ubuntu 14.04 in this section.

We will assume the following things:

  • The private key, SSL certificate, and, if applicable, the CA’s intermediate certificates are located in a home directory at /home/sammy
  • The private key is called example.com.key
  • The SSL certificate is called example.com.crt
  • The CA intermediate certificate(s) are in a file called intermediate.crt

Note: In a real environment, these files should be stored somewhere that only the user that runs the web server master process (usually root) can access. The private key should be kept secure.
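One common way to apply that advice, assuming the example paths above, is to make the key file owned by and readable only by root:

```shell
# Make the private key readable only by root
# (the path is the example location used in this guide)
sudo chown root:root /home/sammy/example.com.key
sudo chmod 600 /home/sammy/example.com.key
```

With these permissions, only processes running as root (such as the web server master process) can read the key.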

Nginx

If you want to use your certificate with Nginx on Ubuntu 14.04, follow this section.

With Nginx, if your CA included an intermediate certificate, you must create a single “chained” certificate file that contains your certificate and the CA’s intermediate certificates.

Change to the directory that contains your private key, certificate, and the CA intermediate certificates (in the intermediate.crt file). We will assume that they are in your home directory for the example:

cd ~

Assuming your certificate file is called example.com.crt, use this command to create a combined file called example.com.chained.crt (replace the highlighted part with your own domain):

cat example.com.crt intermediate.crt > example.com.chained.crt
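After chaining, it can be worth verifying that the certificate actually matches your private key, since a mismatch is a common cause of Nginx failing to start. A minimal sketch using throwaway demo files (run the two digest commands against your real example.com.chained.crt and example.com.key instead):

```shell
# Demo stand-in: a throwaway self-signed key and certificate
# (substitute your real certificate and key files in practice)
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.crt \
  -days 1 -subj "/CN=example.com" 2>/dev/null

# The certificate and key match if these two digests are identical
openssl x509 -noout -modulus -in demo.crt | openssl md5
openssl rsa  -noout -modulus -in demo.key | openssl md5
```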

Now go to your Nginx server block configuration directory. Assuming that is located at /etc/nginx/sites-enabled, use this command to change to it:

cd /etc/nginx/sites-enabled

Assuming you want to add SSL to your default server block file, open the file for editing:

sudo vi default

Find the listen directive and modify it so it looks like this:

    listen 443 ssl;

Then find the server_name directive, and make sure that its value matches the common name of your certificate. Also, add the ssl_certificate and ssl_certificate_key directives to specify the paths of your certificate and private key files (replace the highlighted part with the actual path of your files):

    server_name example.com;
    ssl_certificate /home/sammy/example.com.chained.crt;
    ssl_certificate_key /home/sammy/example.com.key;

If you want HTTP traffic to redirect to HTTPS, you can add this additional server block at the top of the file (replace the highlighted parts with your own information):

server {
    listen 80;
    server_name example.com;
    rewrite ^/(.*) https://example.com/$1 permanent;
}

Then save and quit.

Now restart Nginx to load the new configuration and start serving your site over HTTPS!

sudo service nginx restart

Test it out by accessing your site via HTTPS, e.g. https://example.com.

Apache

If you want to use your certificate with Apache on Ubuntu 14.04, follow this section. This guide assumes that you want to use HTTPS only; if you follow it exactly, plain HTTP will stop working once you are done.

Assuming your server uses the default virtual host configuration file, open it for editing:

sudo vi /etc/apache2/sites-available/000-default.conf

Find the <VirtualHost *:80> entry and modify it so your web server will listen on port 443:

<VirtualHost *:443>

Then add the ServerName directive (substitute your domain name here):

ServerName example.com

Then add the following lines to specify your certificate and key paths (substitute your actual paths here):

SSLCertificateFile /home/sammy/example.com.crt
SSLCertificateKeyFile /home/sammy/example.com.key

If you are using Apache 2.4.8 or greater, specify the CA intermediate bundle by adding this line (substitute the path):

SSLCACertificateFile /home/sammy/intermediate.crt

If you are using an older version of Apache, specify the CA intermediate bundle with this line (substitute the path):

SSLCertificateChainFile /home/sammy/intermediate.crt

Save and exit.

Enable the Apache SSL module by running this command:

sudo a2enmod ssl

Now restart Apache to load the new configuration and start serving your site over HTTPS!

sudo service apache2 restart

Test it out by accessing your site via HTTPS, e.g. https://example.com.

Conclusion

Now you should have a good idea of how to add a trusted SSL certificate to secure your web server. Be sure to shop around for a CA that you are happy with!

Node.js

About Node.js®

As an asynchronous, event-driven framework, Node.js is designed to build scalable network applications. In the following “hello world” example, many connections can be handled concurrently. Upon each connection the callback is fired, but if there is no work to be done, Node sleeps.

var http = require('http');

http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(1337, "127.0.0.1");

console.log('Server running at http://127.0.0.1:1337/');

This is in contrast to today’s more common concurrency model where OS threads are employed. Thread-based networking is relatively inefficient and very difficult to use. Furthermore, users of Node are free from worries of dead-locking the process—there are no locks. Almost no function in Node directly performs I/O, so the process never blocks. Because nothing blocks, less-than-expert programmers are able to develop scalable systems.

Node is similar in design to, and influenced by, systems like Ruby’s Event Machine and Python’s Twisted. Node takes the event model a bit further: it presents the event loop as a language construct instead of as a library. In other systems there is always a blocking call to start the event loop. Typically, one defines behavior through callbacks at the beginning of a script and at the end starts a server through a blocking call like EventMachine::run(). In Node there is no such start-the-event-loop call; Node simply enters the event loop after executing the input script, and exits the event loop when there are no more callbacks to perform. This behavior is like browser JavaScript: the event loop is hidden from the user.

HTTP is a first class citizen in Node, designed with streaming and low latency in mind. This makes Node well suited as the foundation of a web library or framework.

Just because Node is designed without threads doesn’t mean you cannot take advantage of multiple cores in your environment. You can spawn child processes that are easy to communicate with using the child_process.fork() API. Built upon that same interface is the cluster module, which allows you to share sockets between processes to enable load balancing over your cores.