Partition Tables

Many computer users are familiar with the basic idea of filesystems: a storage device is divided into partitions, and each partition is formatted with a particular filesystem that holds files. Just as a filesystem holds files, a partition table holds the partitions and their filesystems. There are a few partition table types; the most commonly known is MBR.

Master Boot Record (MBR) – Most IBM PC-compatible storage devices use this partition table format. MBR is often referred to as the msdos partition table. With the standard 512-byte sectors, MBR can only address storage devices up to two terabytes. MBR supports the concept of primary and logical partitions: a storage device with an MBR table can have at most four primary partitions, although one of them can be an extended partition that holds additional logical partitions. Users wanting to make a multiboot system with more than four operating systems often run into these limits, and only a primary partition can be flagged as active (bootable) in the MBR scheme. Such multiboot systems are better served by a different partition table, discussed later.

GUID Partition Table (GPT) – Some IBM PC-compatible storage devices use GPT, usually because the user reformatted from MBR to GPT or because the machine boots with UEFI firmware. Most Intel-based Mac systems use GPT by default. The GPT partition table offers many improvements over MBR: it supports storage devices up to about nine zettabytes, and it is the recommended partition table for computers needing more than four operating systems on one hard drive. For example, if a computer with a ten-terabyte hard disk is meant to be a multiboot system for seven different Linux distros, then GPT should be used. Most Unix and Unix-like operating systems fully support GPT. Windows, by contrast, can only boot from a GPT disk when running a 64-bit version on UEFI firmware; older Windows systems cannot. As for Mac systems, only the Intel-based ones can boot from GPT.
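
To check which partition table a disk currently uses, parted can print it. A quick sketch, assuming the disk is /dev/sda (substitute your own device):

sudo parted /dev/sda print
# Look for the "Partition Table:" line, e.g. "msdos" (MBR) or "gpt"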

Apple Partition Map (APM) – PowerPC-based Mac systems can only boot from APM partition tables. APM is usually referred to as the Mac or Apple partition table. Linux and Intel-based Macs can use APM; Windows does not support it.

Amiga rigid disk block (RDB) – Amiga systems use the RDB partition table, which supports storage devices up to about 4×10^19 TB, or forty quintillion terabytes.

AIX – The AIX partition table is used by proprietary AIX systems. By default, Linux does not natively support the AIX partition table.

BSD – BSD Unix systems can use the BSD partition table (disklabel). Windows cannot read BSD partition tables, and Linux can only read them if the kernel was built with BSD disklabel support.

Others – A few other partition table formats, listed below, exist but are very rarely used, and little information about them is available on the Internet.

dvh (SGI disk volume header)
humax (Humax set-top boxes)
pc98 (NEC PC-98 systems)
sgi (SGI systems)
sun (Sun disklabel, used on SPARC systems)

Removable Storage – You may be wondering, “Which partition table do flash drives, SD cards, etc. use?” Since virtually all systems can at least read MBR, the majority of mobile/removable storage uses MBR.

Formatting the partition table – To change or reformat a partition table, open GParted and click “Device > Create Partition Table”, then choose the desired partition table. Alternatively, parted can be used to format a storage device with a particular partition table (also called a “disk label”). Doing so will erase all partitions and data on the selected storage device. The command is “parted DEVICE mklabel DISKLABEL” and requires root privileges; a sketch is shown after the list. Afterwards, the user will need to create new partitions on the storage device. The partition tables supported by parted include those listed below.
bsd
loop (raw disk access)
gpt
mac (Apple Partition Map (APM))
msdos (commonly called MBR)
pc98
sun
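
As a concrete sketch, assuming the target disk is /dev/sdb (double-check the device name, since this destroys everything on it):

sudo parted /dev/sdb mklabel gpt                      # write a new GPT disk label
sudo parted /dev/sdb mkpart primary ext4 1MiB 100%    # one partition spanning the disk
sudo mkfs.ext4 /dev/sdb1                              # format the new partition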

WARNING: Changing the partition table will erase the filesystems, partitions, and files. This is a more “low-level” format. However, the files are not truly gone. Read http://www.linux.org/threads/undelete-files-on-linux-systems.4316/ to fully understand the “deletion” of files.

You may also be wondering which partition table is best for you. In general, use MBR with Windows and mobile systems (like Android), APM on PowerPC Macs and iOS devices, RDB on Amiga, and GPT on all other systems. However, you may have specific reasons for placing an OS on a different partition table than the one recommended above.

Upgrading OpenSSH on CentOS

First, download the OpenSSH source tarball from the vendor and unpack it. You can find the tarballs at http://www.openssh.com/portable.html

cd /usr/src

wget http://mirror.team-cymru.org/pub/OpenBSD/OpenSSH/portable/openssh-6.8p1.tar.gz

tar -xvzf openssh-6.8p1.tar.gz

You may need to install a few things for the RPM build to work:

yum install rpm-build gcc make wget openssl-devel krb5-devel pam-devel libX11-devel xmkmf libXt-devel

Copy the spec file and tarball:

mkdir -p /root/rpmbuild/{SOURCES,SPECS}

cp ./openssh-6.8p1/contrib/redhat/openssh.spec /root/rpmbuild/SPECS/

cp openssh-6.8p1.tar.gz /root/rpmbuild/SOURCES/

Do a little magic. The following commands disable the GNOME and X11 askpass subpackages and replace the deprecated BuildPreReq tag in the copied spec file:

cd /root/rpmbuild/SPECS
sed -i -e "s/%define no_gnome_askpass 0/%define no_gnome_askpass 1/g" openssh.spec
sed -i -e "s/%define no_x11_askpass 0/%define no_x11_askpass 1/g" openssh.spec
sed -i -e "s/BuildPreReq/BuildRequires/g" openssh.spec

…and build your RPM:

rpmbuild -bb openssh.spec

Now if you go back into /root/rpmbuild/RPMS/<arch>, you should see three RPMs. Go ahead and install them:

rpm -Uvh *.rpm
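
The running sshd keeps using the old binary until the service is restarted. A minimal sketch, assuming a SysV-init CentOS system; on systemd-based releases, use systemctl instead:

service sshd restart
# CentOS 7 and later: systemctl restart sshd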

To verify the installed version, just type ‘ssh -v localhost’ and you should see the banner come up, indicating the new version. Alternatively, ‘ssh -V’ prints the client version string directly.
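
For example (the exact OpenSSL portion will vary with your build):

ssh -V
# OpenSSH_6.8p1, OpenSSL 1.0.1e-fips 11 Feb 2013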

IMPORTANT: You may want to open a new SSH session to your server before exiting your current one, to make sure everything is working! If you have a problem, simply:

yum downgrade openssh-server

Rackspace joins the OpenPower Foundation

Gigaom

Rackspace is now an official member of the OpenPower Foundation, the IBM-created organization whose job is to help oversee IBM’s open-source chips; these chips are poised to give Intel’s x86 chips a run for their money. The cloud provider said in a blog post Tuesday that it will be working with partners “to design and build an OpenPOWER-based, Open Compute platform” that it eventually aims to put into production. Rackspace now joins Google, Canonical, Nvidia and Samsung as another OpenPower member. In early October, IBM announced a new OpenPower-certified server for webscale-centric companies that comes with an IBM Power8 processor and Nvidia’s GPU accelerator.


An Introduction to Cloud Hosting

Introduction

Cloud hosting is a method of using online virtual servers that can be created, modified, and destroyed on demand. Cloud servers are allocated resources such as CPU cores and memory by the physical servers they are hosted on, and can be configured with a developer’s choice of operating system and accompanying software. Cloud hosting can be used for hosting websites, sending and storing email, and distributing web-based applications and other services.

In this guide, we will go over some of the basic concepts involved in cloud hosting, including how virtualization works, the components in a virtual environment, and comparisons with other common hosting methods.

What is “the Cloud”?

“The Cloud” is a common term that refers to servers connected to the Internet that are available for public use, either through paid leasing or as part of a software or platform service. A cloud-based service can take many forms, including web hosting, file hosting and sharing, and software distribution. “The Cloud” can also be used to refer to cloud computing, which is the practice of using several servers linked together to share the workload of a task. Instead of running a complex process on a single powerful machine, cloud computing distributes the task across many smaller computers.

Other Hosting Methods

Cloud hosting is just one of many different types of hosting available to customers and developers today, though there are some key differences between them. Traditionally, sites and apps with low budgets and low traffic would use shared hosting, while more demanding workloads would be hosted on dedicated servers.

Shared hosting is the most common and most affordable way to get a small and simple site up and running. In this scenario, hundreds or thousands of sites share a common pool of server resources, like memory and CPU. Shared hosting tends to offer the most basic and inflexible features and pricing structures, as access to the site’s underlying software is very limited due to the shared nature of the server.

Dedicated hosting is when a physical server machine is sold or leased to a single client. This is more flexible than shared hosting, as a developer has full control over the server’s hardware, operating system, and software configuration. Dedicated servers are common among more demanding applications, such as enterprise software and commercial services like social media, online games, and development platforms.

How Virtualization Works

Cloud hosting environments are broken down into two main parts: the virtual servers that apps and websites can be hosted on and the physical hosts that manage the virtual servers. This virtualization is what is behind the features and advantages of cloud hosting: the relationship between host and virtual server provides flexibility and scaling that are not available through other hosting methods.

Virtual Servers

The most common form of cloud hosting today is the use of a virtual private server, or VPS. A VPS is a virtual server that acts like a real computer with its own operating system. While virtual servers share resources that are allocated to them by the host, their software is well isolated, so operations on one VPS won’t affect the others.

Virtual servers are deployed and managed by the hypervisor of a physical host. Each virtual server has an operating system installed by the hypervisor and available to the user to add software on top of. For many practical purposes, a virtual server is identical in use to a dedicated physical server, though performance may be lower in some cases due to the virtual server sharing physical hardware resources with other servers on the same host.
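
As an illustration, on a KVM/libvirt host the hypervisor’s guests can be listed and controlled from the command line. A sketch assuming libvirt is installed and a guest named “web01” exists (the name is hypothetical):

virsh list --all      # show all virtual servers on this host
virsh start web01     # boot a guest
virsh shutdown web01  # gracefully stop it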

Hosts

Resources are allocated to a virtual server by the physical server that it is hosted on. This host uses a software layer called a hypervisor to deploy, manage, and grant resources to the virtual servers that are under its control. The term “hypervisor” is often used to refer to the physical hosts that hypervisors (and their virtual servers) are installed on.

The host is in charge of allocating memory, CPU cores, and a network connection to a virtual server when one is launched. An ongoing duty of the hypervisor is to schedule processes between the virtual CPU cores and the physical ones, since multiple virtual servers may be utilizing the same physical cores. The method of choice for process scheduling is one of the key differences between different hypervisors.

Hypervisors

There are a few common hypervisors available for cloud hosts today. These virtualization methods have some key differences, but they all provide the tools that a host needs to deploy, maintain, move, and destroy virtual servers as needed.

KVM, short for “Kernel-based Virtual Machine”, is a virtualization infrastructure built into the Linux kernel. When activated, this kernel module turns the Linux machine into a hypervisor, allowing it to begin hosting virtual servers. This approach contrasts with how other hypervisors usually work, as KVM does not need to create or emulate kernel components that are used for virtual hosting.
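
To check whether a Linux machine can act as a KVM host, two quick commands help: the first counts CPU flags that indicate hardware virtualization support, and the second shows whether the KVM modules are loaded:

egrep -c '(vmx|svm)' /proc/cpuinfo   # non-zero means Intel VT-x or AMD-V is present
lsmod | grep kvm                     # kvm and kvm_intel/kvm_amd appear when loaded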

Xen is one of the most common hypervisors in use today. Unlike KVM, Xen uses a microkernel, which provides the tools needed to support virtual servers without modifying the host’s kernel. Xen supports two distinct methods of virtualization: paravirtualization, which skips the need to emulate hardware but requires special modifications to the virtual servers’ operating systems, and hardware-assisted virtualization, which uses special hardware features to efficiently emulate a virtual server so that it can run an unmodified operating system.
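
On a Xen host, the xl toolstack offers a similar view of the hypervisor and its guests (a sketch; output will differ per host):

xl info   # details about the hypervisor and host hardware
xl list   # running domains (virtual servers)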

ESXi is an enterprise-level hypervisor offered by VMware. ESXi is unique in that it doesn’t require the host to have an underlying operating system. This is referred to as a “type 1” hypervisor and is extremely efficient due to the lack of a “middleman” between the hardware and the virtual servers. With type 1 hypervisors like ESXi, no operating system needs to be loaded on the host because the hypervisor itself acts as the operating system.

Hyper-V is one of the most popular methods of virtualizing Windows servers and is available as a system service in Windows Server. This makes Hyper-V a common choice for developers working within a Windows software environment. Hyper-V is included in Windows Server 2008 and 2012 and is also available as a stand-alone server without an existing installation of Windows Server.

Why Cloud Hosting?

The features offered by virtualization lend themselves well to a cloud hosting environment. Virtual servers can be configured with a wide range of hardware resource allocations, and can often have resources added or removed as needs change over time. Some cloud hosts can move a virtual server from one hypervisor to another with little or no downtime or duplicate the server for redundancy in case of a node failure.

Customization

Developers often prefer to work in a VPS due to the control that they have over the virtual environment. Most virtual servers running Linux offer access to the root (administrator) account or sudo privileges by default, giving a developer the ability to install and modify whatever software they need.

This freedom of choice begins with the operating system. Most hypervisors are capable of hosting nearly any guest operating system, from open source software like Linux and BSD to proprietary systems like Windows. From there, developers can begin installing and configuring the building blocks needed for whatever they are working on. A cloud server’s configurations might involve a web server, database, email service, or an app that has been developed and is ready for distribution.
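
For instance, on a freshly created Linux VPS, a developer with sudo access might set up a basic web stack in a few commands. A sketch assuming a Debian/Ubuntu image (package names differ on other distros):

sudo apt-get update                         # refresh the package index
sudo apt-get install -y nginx mysql-server  # web server plus database
sudo service nginx start                    # make sure the web server is running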

Scalability

Cloud servers are very flexible in their ability to scale. Scaling methods fall into two broad categories: horizontal scaling and vertical scaling. Most hosting methods can scale one way or the other, but cloud hosting is unique in its ability to scale both horizontally and vertically. This is due to the virtual environment that a cloud server is built on: since its resources are an allocated portion of a larger physical pool, it’s easy to adjust these resources or duplicate the virtual image to other hypervisors.

Horizontal scaling, often referred to as “scaling out”, is the process of adding more nodes to a clustered system. This might involve adding more web servers to better manage traffic, adding new servers to a region to reduce latency, or adding more database workers to increase data transfer speed. Many newer web utilities, like CoreOS, Docker, and Couchbase, are built around efficient horizontal scaling.
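
A toy sketch of scaling out with Docker: each run command adds another identical web server instance, and in practice a load balancer would distribute traffic across them (image and ports are arbitrary):

docker run -d --name web1 -p 8081:80 nginx   # first web node
docker run -d --name web2 -p 8082:80 nginx   # second node; repeat to scale out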

Vertical scaling, or “scaling up”, is when a single server is upgraded with additional resources. This might be an expansion of available memory, an allocation of more CPU cores, or some other upgrade that increases that server’s capacity. These upgrades usually pave the way for additional software instances, like database workers, to operate on that server. Before horizontal scaling became cost-effective, vertical scaling was the method of choice to respond to increasing demand.

With cloud hosting, developers can scale depending on their application’s needs — they can scale out by deploying additional VPS nodes, scale up by upgrading existing servers, or do both when server needs have dramatically increased.

Conclusion

By now, you should have a decent understanding of how cloud hosting works, including the relationship between hypervisors and the virtual servers that they are responsible for, as well as how cloud hosting compares to other common hosting methods. With this information in mind, you can choose the best hosting for your needs.

New cPanel Database Mapping Feature: Is it for You?

On April 14, 2010, cPanel announced that cPanel 11.25.1 will include a new, long-requested database mapping feature: the removal of cPanel username prefixes from database names. This is a non-reversible, opt-in feature that some hosts may find very valuable. But is it a feature that you need?
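
To illustrate the naming scheme (the account name “bob” and the database names here are hypothetical):

# Traditional cPanel naming prefixes databases with the account username:
mysql -u bob -p -e "SHOW DATABASES"
# ... bob_blog, bob_shop
# With database mapping enabled, the same databases can simply be named blog and shop.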

Who is this feature for?

  • Hosts migrating entire servers from other control panels like Plesk or Ensim
  • Single-customer environments

Who is this feature not for?

  • Shared hosting providers
  • Larger-scale hosts

The concerns initially raised are in regard to shared hosting servers. With the new database mapping feature turned on, if one user takes a database name, no one else on the server can use it. Additionally, you’re creating a conflict if you move that user from one server to another where the recipient server already has a user with that database name. For these reasons alone, I would not advise enabling this option for the general shared hosting provider if end users are going to be allowed to pick database names.

One of the advantages of cPanel is that you can move accounts between cPanel servers, even those from other hosts. If one host has the new mapping feature enabled and the other doesn’t (or runs an older version of cPanel), you’re likely going to have a problem. For hosts with high conversion rates, this can be a deal breaker if moving cPanel accounts from other hosts is no longer easy. This feature also breaks the standardization that all cPanel servers inherently have. Most users who have worked with cPanel know the current database naming scheme, so enabling this feature without a technical justification can create confusion among users who are familiar with cPanel and have been using it for a long time.

Update: A rep from cPanel added this comment:

As of cPanel 11.25.0 builds 46057 and higher, accounts transferred from a cPanel 11.25.1 system will preserve the YAML mapping file. Any databases and database users that lack the old-style prefix will not be manageable in the 11.25.0 cPanel interface, but the information is at least retained for later use (e.g. if a system with such an account is later upgraded to cPanel 11.25.1+ then the pre-existing YAML file will be updated and the databases and user will be manageable in the cPanel interface).

On the other hand, this feature is extremely valuable for hosts converting from other control panels or fulfilling requirements of single-customer environments. Other control panels do not prefix usernames to the database name, so large transfers would be especially painful for a cPanel host that acquires a non-cPanel host. The new mapping feature will help eliminate downtime due to incorrect database connection parameters and the need for mass reconfiguration.

Finally, for hosts that offer VPS and Dedicated hosting to single-customer environments, it’s nice to finally be able to remove the prefix that annoys web developers and IT people in charge of moving their customer sites.

So while this new feature is exciting, it’s opt-in for a reason – and that doesn’t mean it’s right for your hosting setup.

Additional Information:

http://www.cpanel.net/blog/integration/2010/04/a-general-overview-of-database-mapping.html
http://www.cpanel.net/2010/05/backwards-incompatible.html
http://www.cpanel.net/blog/integration/2010/05/more-details-about-db-mapping.html