Which Distro is Right for Me?

Debian Debian is well-suited to those who need stability. It ships older software that is known to be reliable, which is why hospitals that run Linux generally put Debian on their important systems. For the same reason, Debian is usually a wise choice for a server. The recommended system requirements are a 1GHz processor, 512MB memory, and a 5GB hard-drive. http://www.debian.org/distrib/

Ubuntu For those who like Debian but want newer software and a more polished interface, Ubuntu is a common choice. Ubuntu is stable, but many Linux users still recommend Debian for critical systems. The average mainstream desktop/laptop user will probably want Ubuntu. The recommended system requirements are a 1GHz processor, 800MB memory, and a 5GB hard-drive. http://www.ubuntu.com/download

Kubuntu The same as Ubuntu, but with KDE instead of Unity. Users who dislike Unity may prefer Kubuntu. The recommended system requirements are a 1GHz processor, 10GB hard-drive, and more than 1GB memory. http://www.kubuntu.org/getkubuntu

Xubuntu Xubuntu is a lightweight Ubuntu system for older hardware or hardware with fewer resources. Xubuntu uses the XFCE interface instead of Unity. The recommended system requirements are 512MB memory and a 5GB hard-drive (try Lubuntu for something even more lightweight). http://xubuntu.org/getxubuntu/

Linux Mint People who want a Debian-based system but dislike Unity may be interested in Linux Mint. Linux Mint comes with the MATE, Cinnamon, XFCE, or KDE interface (user’s choice). The recommended system requirements are a 1GHz processor, 1GB memory, and a 10GB hard-drive. http://www.linuxmint.com/download.php

BackTrack (Kali) This is a penetration-testing and security-auditing system (BackTrack was Ubuntu-based; its successor, Kali, is Debian-based). I would recommend this for Anonymous. BackTrack/Kali is often used for hacking into other systems, although that is illegal unless you are breaking into a computer of your own because you forgot the password. BackTrack/Kali is also used to evaluate security; some companies use it to find security flaws in their own systems. http://www.kali.org/downloads/

Slackware Slackware is a simple, lightweight system. It is usually preferred by advanced users, since it is less user-friendly than most other distros; advanced users wanting a lightweight system may prefer it. The recommended system requirements are an i486 processor, 256MB memory, and a 5GB hard-drive. http://www.slackware.com/

Arch Arch Linux is a minimalist system designed to be simple. It is also lightweight and popular among advanced Linux users. Advanced users who dislike Slackware may like Arch. https://www.archlinux.org/download/

Fedora Some Linux users say Fedora is the Red Hat counterpart of Ubuntu (a Debian-based system). Fedora suits many mainstream desktop/laptop users; it handles graphics well and uses appealing interfaces. The recommended system requirements are 1GB memory and a 10GB hard-drive. http://fedoraproject.org/en/get-fedora

Red Hat Enterprise Linux Red Hat Enterprise Linux (RHEL) is usually used as a server system. Fedora is the client/desktop system, while RHEL is the server “version”. So, if you would like to use a Fedora-style system as a server, or need a system that is more stable than Fedora, use RHEL.

Puppy Linux This is a very lightweight system that is usually used on older systems due to the light requirements. Puppy Linux may not have the best-looking interface, but it is still easy to use. The recommended system requirements are 333MHz processor, 64MB memory, 512MB swap, and 1GB hard-drive. http://puppylinux.org/main/Download Latest Release.htm

AnitaOS This is a Puppy Linux derivative developed by @Darren Hale and intended for old hardware. AnitaOS uses old kernels, while mainstream Puppy Linux uses newer ones. http://sourceforge.net/projects/anitaos/ | http://www.linux.org/threads/anitaos-a-diy-distro-you-build-it-yourself.4401/

Damn Small Linux (DSL) This is a lightweight Linux system that requires 8MB of memory and at least an i486 processor. People needing a lightweight system may want DSL if they dislike Puppy Linux. http://www.damnsmalllinux.org/download.html

CentOS CentOS is often compared to Linux Mint, but CentOS is Red-Hat-based instead of Debian-based. In fact, CentOS is RHEL without the branding: if you want RHEL but do not want to pay for it and its support, get CentOS. People who like Linux Mint but want a Red Hat system may be interested in CentOS. The recommended system requirements are 256MB memory and a 256MB hard-drive. http://www.centos.org/modules/tinycontent/index.php?id=30

OpenSUSE OpenSUSE is an RPM-based distro (not derived from Red Hat) that uses YaST and ZYpp. OpenSUSE is available as a rolling release or on a stable, version-by-version basis. The minimum requirements are 2GB memory, 5GB hard-drive space, and a 2.4GHz AMD64 or Intel processor. http://www.opensuse.org

If you need a distro containing no closed-source software anywhere in the system, check out GNU.org’s list of fully free GNU/Linux operating systems – https://www.gnu.org/distros/free-distros.en.html

KernelCare Security Update Issued

An update for KernelCare has just been released to address CVE-2014-9322; your OS should be updated automatically unless you have specified otherwise.

Remote backup service

A remote, online, or managed backup service, sometimes marketed as cloud backup, is a service that provides users with a system for the backup, storage, and recovery of computer files. Online backup providers are companies that provide this type of service to end users (or clients). Such backup services are considered a form of cloud computing.

Online backup systems are typically built around a client software program that runs on a schedule, typically once a day, and usually at night while computers aren’t in use. This program typically collects, compresses, encrypts, and transfers the data to the remote backup service provider’s servers or off-site hardware.
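
As a rough illustration of that collect–compress–encrypt–transfer flow, the sketch below (in Python) packs a directory into a compressed archive, encrypts it with a key that stays on the client, and uploads the result. The paths, key handling, and upload endpoint are hypothetical placeholders rather than any particular provider’s API.

    import tarfile
    from pathlib import Path
    from cryptography.fernet import Fernet   # pip install cryptography
    import requests                           # pip install requests

    SOURCE_DIR = Path("/home/user/documents")             # hypothetical data to protect
    ARCHIVE = Path("/tmp/backup.tar.gz")
    UPLOAD_URL = "https://backup.example.com/api/upload"   # hypothetical provider endpoint

    def collect_and_compress(source: Path, archive: Path) -> None:
        """Pack the source directory into a gzip-compressed tar archive."""
        with tarfile.open(archive, "w:gz") as tar:
            tar.add(source, arcname=source.name)

    def encrypt(archive: Path, key: bytes) -> bytes:
        """Encrypt the archive with a symmetric key held only by the user."""
        return Fernet(key).encrypt(archive.read_bytes())

    def transfer(ciphertext: bytes) -> None:
        """Send the encrypted archive to the remote backup provider."""
        requests.post(UPLOAD_URL, data=ciphertext, timeout=300).raise_for_status()

    if __name__ == "__main__":
        key = Fernet.generate_key()   # in practice, a key the user already keeps safe
        collect_and_compress(SOURCE_DIR, ARCHIVE)
        transfer(encrypt(ARCHIVE, key))

Running a script like this from cron or a systemd timer gives the once-a-day, overnight schedule described above; real clients add incremental transfer, retries, and restore tooling.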

Characteristics

Service-based

  1. The assurance, guarantee, or validation that what was backed up is recoverable whenever it is required is critical. Data stored in the service provider’s cloud must undergo regular integrity validation to ensure its recoverability (a minimal integrity-check sketch follows this list).
  2. Cloud BUR (BackUp & Restore) services need to provide a range of granularity in RTOs (Recovery Time Objectives). One size does not fit all, either for the customers or for the applications within a customer’s environment.
  3. The customer should never have to manage the back end storage repositories in order to back up and recover data.
  4. The interface used by the customer needs to enable the selection of data to protect or recover, the establishment of retention times and destruction dates, as well as scheduling.
  5. Cloud backup needs to be an active process in which data is collected from the systems that store the original copy. This means that cloud backup should not require data to be copied into a dedicated staging appliance before being transmitted to and stored in the service provider’s data centre.
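
One common way to implement the integrity validation mentioned in item 1 is to record a checksum when each backup object is written and re-verify it on a schedule. The sketch below is a minimal illustration with hypothetical paths; production systems verify objects where they are stored and at much larger scale.

    import hashlib
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        """Compute the SHA-256 digest of a stored backup object in 1 MiB chunks."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1024 * 1024), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def still_recoverable(backup_object: Path, recorded_digest: str) -> bool:
        """True if the stored object still matches the digest recorded at backup time."""
        return sha256_of(backup_object) == recorded_digest

    # Hypothetical usage, with a digest recorded when the backup was taken:
    # still_recoverable(Path("/vault/customer42/2015-01-01.tar.gz.enc"), "ab3f09…")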

Ubiquitous access

  1. Cloud BUR utilizes standard networking protocols (which today are primarily but not exclusively IP based) to transfer data between the customer and the service provider.
  2. Vaults or repositories need to be always available to restore data to any location connected to the Service Provider’s Cloud via private or public networks.

Scalable and elastic

  1. Cloud BUR enables flexible allocation of storage capacity to customers without limit. Storage is allocated on demand and also de-allocated as customers delete backup sets as they age.
  2. Cloud BUR enables a Service Provider to allocate storage capacity to a customer. If that customer later deletes their data or no longer needs that capacity, the Service Provider can then release and reallocate that same capacity to a different customer in an automated fashion.

Metered by use

  1. Cloud Backup allows customers to align the value of data with the cost of protecting it. It is procured on a per-gigabyte per month basis. Prices tend to vary based on the age of data, type of data (email, databases, files etc.), volume, number of backup copies and RTOs.

Shared and secure

  1. The underlying enabling technology for Cloud Backup is a full stack native cloud multitenant platform (shared everything).
  2. Data mobility/portability prevents service provider lock-in and allows customers to move their data from one Service Provider to another, or entirely back into a dedicated Private Cloud (or a Hybrid Cloud).
  3. Security in the cloud is critical. One customer can never have access to another’s data. Additionally, even Service Providers must not be able to access their customer’s data without the customer’s permission.

Enterprise-class cloud backup

An enterprise-class cloud backup solution must include an on-premise cache, to mitigate any issues due to inconsistent Internet connectivity.

Hybrid cloud backup is a backup approach combining Local backup for fast backup and restore, along with Off-site backup for protection against local disasters. According to Liran Eshel, CEO of CTERA Networks, this ensures that the most recent data is available locally in the event of need for recovery, while archived data that is needed much less often is stored in the cloud.

Hybrid cloud backup works by storing data to local disk so that the backup can be captured at high speed, and then either the backup software or a D2D2C (Disk to Disk to Cloud) appliance encrypts and transmits data to a service provider. Recent backups are retained locally, to speed data recovery operations. There are a number of cloud storage appliances on the market that can be used as a backup target, including appliances from CTERA Networks, Nasuni, StorSimple and TwinStrata. Examples of Enterprise-class cloud backup solutions include StoreGrid, Datto, GFI Software’s MAX Backup, and IASO Backup.

Recent improvements in CPU availability allow increased use of software agents instead of hardware appliances for enterprise cloud backup.  The software-only approach can offer advantages including decreased complexity, simple scalability, significant cost savings and improved data recovery times. Examples of no-appliance cloud backup providers include Intronis and Zetta.net.

Typical features

Encryption
Data should be encrypted before it is sent across the Internet, and it should be stored in its encrypted state. Encryption should be at least 256 bits, and the user should have the option of using their own encryption key, which should never be sent to the server (a minimal client-side encryption sketch follows this feature list).
Network backup
A backup service supporting network backup can back up multiple computers, servers or Network Attached Storage appliances on a local area network from a single computer or device.
Continuous backup – Continuous Data Protection
Allows the service to back up continuously or on a predefined schedule. Both methods have advantages and disadvantages. Most backup services are schedule-based and perform backups at a predetermined time. Some services provide continuous data backups which are used by large financial institutions and large online retailers. However, there is typically a trade-off with performance and system resources.
File-by-File Restore
The ability for users to restore files themselves, without the assistance of a Service Provider, by selecting files by name and/or folder. Some services allow users to select files by searching for filenames and folder names, by dates, by file type, by backup set, and by tags.
Online access to files
Some services allow you to access backed-up files via a normal web browser. Many services do not provide this type of functionality.
Data compression
Data will typically be compressed with a lossless compression algorithm to minimize the amount of bandwidth used.
Differential data compression
A way to further minimize network traffic is to transfer only the binary data that has changed from one day to the next, similar to the open-source file-transfer utility rsync. More advanced online backup services use this method rather than transferring entire files.
Bandwidth usage
User-selectable option to use more or less bandwidth; it may be possible to set this to change at various times of day.
Off-Line Backup
Off-line backup, offered alongside and as part of the online backup solution, covers daily backups during periods when the network connection is down. During such periods the remote backup software performs the backup onto a local media device such as a tape drive, a disk, or another server. The minute the network connection is restored, the remote backup software updates the remote datacenter with the changes from the off-line backup media.
Synchronization
Many services support data synchronization allowing users to keep a consistent library of all their files across many computers. The technology can help productivity and increase access to data.
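
As a minimal sketch of the client-side encryption feature described above, the example below derives a 256-bit AES-GCM key from a passphrase that never leaves the user’s machine, so only ciphertext reaches the provider. The key-derivation parameters and layout are illustrative, not any specific vendor’s scheme.

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def derive_key(passphrase: str, salt: bytes) -> bytes:
        """Derive a 256-bit key from the user's passphrase; the passphrase is never uploaded."""
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
        return kdf.derive(passphrase.encode())

    def encrypt_for_upload(plaintext: bytes, passphrase: str) -> bytes:
        """Encrypt data with AES-256-GCM before it is sent to the backup server."""
        salt, nonce = os.urandom(16), os.urandom(12)
        ciphertext = AESGCM(derive_key(passphrase, salt)).encrypt(nonce, plaintext, None)
        # The salt and nonce are not secret; storing them with the ciphertext lets the
        # user decrypt later, while the server only ever sees encrypted bytes.
        return salt + nonce + ciphertext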

Common features for business users

Bulk restore
A way to restore data from a portable storage device when a full restore over the Internet might take too long.
Centralized management console
Allows for an IT department or staff member to monitor backups for the user.
File retention policies
Many businesses require a flexible file retention policy that can be applied to an unlimited number of groups of files called “sets”.
Fully managed services
Some services offer a higher level of support to businesses that might request immediate help, proactive monitoring, personal visits from their service provider, or telephone support.
Redundancy
Multiple copies of data backed up at different locations. This can be achieved by having two or more mirrored data centers, or by keeping a local copy of the latest version of backed up data on site with the business.
Regulatory compliance
Some businesses are required to comply with government regulations that govern privacy, disclosure, and legal discovery. A service provider that offers this type of service assists customers with proper compliance with and understanding of these laws.
Seed loading
Ability to send a first backup on a portable storage device rather than over the Internet when a user has large amounts of data that they need quickly backed up.
Server backup
Many businesses require backups of servers and the special databases that run on them, such as groupware, SQL, and directory services.
Versioning
Keeps multiple past versions of files to allow for rollback to or restoration from a specific point in time.

Cost factors

Online backup services are usually priced as a function of the following things:

  1. The total amount of data being backed up.
  2. The number of machines covered by the backup service.
  3. The maximum number of versions of each file that are kept.
  4. Data retention and archiving period options.
  5. Managed backups vs. unmanaged backups.
  6. The level of service and features available.

Some vendors limit the number of versions of a file that can be kept in the system. Some services omit this restriction and provide an unlimited number of versions. Add-on features (plug-ins), like the ability to back up currently open or locked files, are usually charged as an extra, but some services provide this built in.

Most remote backup services reduce the amount of data to be sent over the wire by backing up only changed files. This approach reduces the customer’s total stored data. The amount of data sent and stored can be reduced even further by transmitting only the changed data bits via binary or block-level incremental backups. Solutions that transmit only these changed binary data bits do not waste bandwidth by transmitting the same file data over and over again when only small amounts change.
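
A simplified model of the block-level incremental approach just described: split each file into fixed-size blocks, hash them, and send only the blocks whose hashes changed since the last run. Real products typically use rolling checksums (as rsync does) and their own wire formats, so treat this only as a sketch.

    import hashlib
    from pathlib import Path
    from typing import Dict, List, Tuple

    BLOCK_SIZE = 4 * 1024 * 1024   # 4 MiB blocks; products choose their own sizes

    def block_hashes(path: Path) -> Dict[int, str]:
        """Map block index -> SHA-256 hash for every fixed-size block of the file."""
        hashes: Dict[int, str] = {}
        with path.open("rb") as f:
            index = 0
            while chunk := f.read(BLOCK_SIZE):
                hashes[index] = hashlib.sha256(chunk).hexdigest()
                index += 1
        return hashes

    def changed_blocks(path: Path, previous: Dict[int, str]) -> List[Tuple[int, bytes]]:
        """Return only the (index, data) pairs whose hashes differ from the previous backup."""
        changed = [i for i, h in block_hashes(path).items() if previous.get(i) != h]
        payload = []
        with path.open("rb") as f:
            for i in changed:
                f.seek(i * BLOCK_SIZE)
                payload.append((i, f.read(BLOCK_SIZE)))
        return payload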

Advantages

Remote backup has advantages over traditional backup methods:

  • Perhaps the most important aspect of backing up is that backups are stored in a different location from the original data. Traditional backup requires manually taking the backup media offsite.
  • Remote backup does not require user intervention. The user does not have to change tapes, label CDs or perform other manual steps.
  • Unlimited data retention (presuming the backup provider stays in business).
  • Backups are automatic.
  • Some remote backup services will work continuously, backing up files as they are changed.
  • Most remote backup services will maintain a list of versions of your files.
  • Most remote backup services will use 128- to 448-bit encryption to send data over unsecured links (i.e. the Internet).
  • A few remote backup services can reduce backup time and bandwidth by transmitting only changed binary data bits.

Disadvantages

Remote backup has some disadvantages over traditional backup methods:

  • Depending on the available network bandwidth, the restoration of data can be slow. Because data is stored offsite, the data must be recovered either via the Internet or via a disk shipped from the online backup service provider (a rough worked example follows this list).
  • Some backup service providers have no guarantee that stored data will be kept private — for example, from employees. As such, most recommend that files be encrypted.
  • It is possible that a remote backup service provider could go out of business or be purchased, which may affect the accessibility of one’s data or the cost to continue using the service.
  • If the encryption password is lost, data recovery will be impossible. However, with managed services this should not be a problem.
  • Residential broadband services often have monthly limits that preclude large backups. They are also usually asymmetric; the user-to-network link regularly used to store backups is much slower than the network-to-user link used only when data is restored.
  • In terms of price, when looking at the raw cost of hard disks, remote backups cost about 1-20 times per GB what a local backup would.
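
As a rough worked example of the bandwidth point above, the snippet below estimates transfer times at illustrative asymmetric broadband speeds; the figures are not measurements of any particular service.

    def transfer_hours(data_gb: float, link_mbps: float) -> float:
        """Hours needed to move data_gb gigabytes over a link_mbps megabit-per-second link."""
        bits = data_gb * 8 * 1000 ** 3            # decimal gigabytes to bits
        return bits / (link_mbps * 1000 ** 2) / 3600

    # Illustrative asymmetric link: 10 Mbit/s upload, 100 Mbit/s download.
    print(f"Initial 500 GB backup at 10 Mbit/s up:  ~{transfer_hours(500, 10):.0f} hours")
    print(f"Full 500 GB restore at 100 Mbit/s down: ~{transfer_hours(500, 100):.0f} hours")

At those speeds the first full backup takes several days, which is exactly why seed loading and bulk restore from shipped disks exist.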

Managed vs. unmanaged

Some services provide expert backup management services as part of the overall offering. These services typically include:

  • Assistance configuring the initial backup
  • Continuous monitoring of the backup processes on the client machines to ensure that backups actually happen
  • Proactive alerting in the event that any backups fail
  • Assistance in restoring and recovering data

Disaster recovery

Disaster recovery (DR) involves a set of policies and procedures to enable the recovery or continuation of vital technology infrastructure and systems following a natural or human-induced disaster. Disaster recovery focuses on the IT or technology systems supporting critical business functions, as opposed to business continuity, which involves keeping all essential aspects of a business functioning despite significant disruptive events. Disaster recovery is therefore a subset of business continuity.

Classification of disasters

Disasters can be classified into two broad categories. The first is natural disasters such as floods, hurricanes, tornadoes or earthquakes. While preventing a natural disaster is very difficult, risk management measures such as avoiding disaster-prone situations and good planning can help. The second category is man-made disasters, such as hazardous material spills, infrastructure failure, bio-terrorism, and disastrous IT bugs or failed change implementations. In these instances, surveillance, testing and mitigation planning are invaluable.

Importance of disaster recovery planning

Recent research supports the idea that implementing a more holistic pre-disaster planning approach is more cost-effective in the long run. Every $1 spent on hazard mitigation (such as a disaster recovery plan) saves society $4 in response and recovery costs.

As IT systems have become increasingly critical to the smooth operation of a company, and arguably the economy as a whole, the importance of ensuring the continued operation of those systems, and their rapid recovery, has increased. For example, of companies that had a major loss of business data, 43% never reopen and 29% close within two years. As a result, preparation for continuation or recovery of systems needs to be taken very seriously. This involves a significant investment of time and money with the aim of ensuring minimal losses in the event of a disruptive event.

Control measures

Control measures are steps or mechanisms that can reduce or eliminate various threats for organizations. Different types of measures can be included in a disaster recovery plan (DRP).

Disaster recovery planning is a subset of a larger process known as business continuity planning and includes planning for resumption of applications, data, hardware, electronic communications (such as networking) and other IT infrastructure. A business continuity plan (BCP) includes planning for non-IT related aspects such as key personnel, facilities, crisis communication and reputation protection, and should refer to the disaster recovery plan (DRP) for IT related infrastructure recovery / continuity.

IT disaster recovery control measures can be classified into the following three types:

  1. Preventive measures – Controls aimed at preventing an event from occurring.
  2. Detective measures – Controls aimed at detecting or discovering unwanted events.
  3. Corrective measures – Controls aimed at correcting or restoring the system after a disaster or an event.

Good disaster recovery plan measures dictate that these three types of controls be documented and exercised regularly using so-called “DR tests”.

Strategies

Prior to selecting a disaster recovery strategy, a disaster recovery planner first refers to their organization’s business continuity plan which should indicate the key metrics of recovery point objective (RPO) and recovery time objective (RTO) for various business processes (such as the process to run payroll, generate an order, etc.). The metrics specified for the business processes are then mapped to the underlying IT systems and infrastructure that support those processes.

Incomplete RTOs and RPOs can quickly derail a disaster recovery plan. Every item in the DR plan requires a defined recovery point and time objective, as failure to create them may lead to significant problems that can extend the disaster’s impact. Once the RTO and RPO metrics have been mapped to IT infrastructure, the DR planner can determine the most suitable recovery strategy for each system. The organization ultimately sets the IT budget and therefore the RTO and RPO metrics need to fit with the available budget. While most business unit heads would like zero data loss and zero time loss, the cost associated with that level of protection may make the desired high availability solutions impractical. A cost-benefit analysis often dictates which disaster recovery measures are implemented.
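
As a toy illustration of mapping RTO/RPO targets to candidate strategies, the sketch below uses hypothetical system names and thresholds; in practice the thresholds come out of the business continuity plan and the available budget.

    from dataclasses import dataclass

    @dataclass
    class SystemObjectives:
        name: str
        rto_hours: float   # how quickly the system must be running again
        rpo_hours: float   # how much data loss (in time) is tolerable

    def pick_strategy(obj: SystemObjectives) -> str:
        """Choose a data-protection strategy from illustrative RTO/RPO thresholds."""
        if obj.rto_hours <= 1 and obj.rpo_hours <= 0.25:
            return "high-availability replication (hot standby)"
        if obj.rto_hours <= 24:
            return "disk-to-disk-to-cloud backup with off-site copies"
        return "periodic backups shipped off-site (tape or cloud)"

    for system in (SystemObjectives("order entry", rto_hours=0.5, rpo_hours=0.1),
                   SystemObjectives("payroll", rto_hours=8, rpo_hours=4),
                   SystemObjectives("archive file share", rto_hours=72, rpo_hours=24)):
        print(f"{system.name}: {pick_strategy(system)}")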

Some of the most common strategies for data protection include:

  • backups made to tape and sent off-site at regular intervals
  • backups made to disk on-site and automatically copied to off-site disk, or made directly to off-site disk
  • replication of data to an off-site location, which overcomes the need to restore the data (only the systems then need to be restored or synchronized), often making use of storage area network (SAN) technology
  • Hybrid Cloud solutions that replicate both on-site and to off-site data centers. These solutions provide the ability to instantly fail-over to local on-site hardware, but in the event of a physical disaster, servers can be brought up in the cloud data centers as well. Examples include Quorom, rCloud from Persistent Systems or EverSafe.
  • the use of high availability systems which keep both the data and system replicated off-site, enabling continuous access to systems and data, even after a disaster (often associated with cloud storage)

In many cases, an organization may elect to use an outsourced disaster recovery provider to provide a stand-by site and systems rather than using their own remote facilities, increasingly via cloud computing.

In addition to preparing for the need to recover systems, organizations also implement precautionary measures with the objective of preventing a disaster in the first place. These may include:

  • local mirrors of systems and/or data and use of disk protection technology such as RAID
  • surge protectors — to minimize the effect of power surges on delicate electronic equipment
  • use of an uninterruptible power supply (UPS) and/or backup generator to keep systems going in the event of a power failure
  • fire prevention/mitigation systems such as alarms and fire extinguishers
  • anti-virus software and other security measures

How Organizations Are Authenticated for SSL Certificates

Certification Authorities (CAs) are trusted third parties that authenticate customers before issuing SSL certificates to secure their servers.

Exactly how do CAs authenticate these organizations? And where are the rules that determine what CAs must do during authentication?

The Rules on Customer Authentication

In the past, there were no common rules applicable to CAs as to minimum steps required to authenticate a customer before issuing an SSL certificate. Instead, each CA was permitted to create its own authentication processes, and was only required to describe the process in general terms in its public Certification Practice Statement (CPS). In many cases, the CPS authentication description was vague and hard to understand, and some CAs were less diligent than others during authentication.

To raise the bar for customer authentication, CAs first developed new, common, more stringent authentication standards for their new Extended Validation (EV) certificates, which earn a favorable “green bar” user interface in most browsers and applications. These minimum authentication standards were detailed in the CA/Browser Forum’s 2008 Guidelines for The Issuance and Management of Extended Validation Certificates. In 2012, additional authentication requirements were added for all SSL certificates in the CA/Browser Forum’s Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates. See documents at https://www.cabforum.org/documents.html.

All public CAs must follow these same authentication rules when issuing SSL certificates.

Requirements Common to All Certificates

First of all, there are authentication and technical requirements applicable to all certificates. A CA must check the customer’s certificate signing request to make sure it meets the minimum cryptographic algorithm and key size requirements and is not based on known weak Debian keys.
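
A minimal sketch of that kind of CSR check, using the Python cryptography library; the size thresholds are illustrative (the Baseline Requirements define the actual minima), and the Debian weak-key blacklist lookup is omitted.

    from cryptography import x509
    from cryptography.hazmat.primitives.asymmetric import ec, rsa

    MIN_RSA_BITS = 2048   # illustrative threshold only

    def csr_key_acceptable(pem_bytes: bytes) -> bool:
        """Reject CSRs whose public keys are too small or use an unexpected algorithm."""
        csr = x509.load_pem_x509_csr(pem_bytes)
        if not csr.is_signature_valid:          # proof of possession of the private key
            return False
        key = csr.public_key()
        if isinstance(key, rsa.RSAPublicKey):
            return key.key_size >= MIN_RSA_BITS
        if isinstance(key, ec.EllipticCurvePublicKey):
            return key.curve.key_size >= 256
        return False                            # unknown algorithm: fail closed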

A CA must then check the customer and certificate data against a list of high risk applicants (organizations or domains that are most commonly targeted in phishing and other fraudulent schemes), and perform extra authentication steps as appropriate. The CA must also check the customer against internal databases maintained by the CA that include previously revoked certificates and certificate requests previously rejected due to suspected phishing or other fraudulent usage. Finally, the CA must confirm that the customer and its location are not identified on any government denied list of prohibited persons, organizations, or countries.

These basic CA checks are one major advantage of CA-issued certificates over self-signed certificates (including DANE certificates), which are not subject to any similar public safeguards.

Simplest Authentication – Domain Validation (DV) Certificates

After these basic checks, the simplest level of authentication occurs for Domain Validation or DV certificates. These certificates do not confirm the identity of the certificate holder, but they do confirm that the certificate holder owns or controls the domains included inside the DV certificate.

DV validation is usually performed using an automated method in which the CA sends the customer an email message containing a confirming URL link, using a limited list of email addresses. The only email addresses that CAs are allowed to use for domain confirmation are:

  1. Any contact email addresses for the domain shown in the WhoIs record, or
  2. An email address formed from one of five permitted prefixes (admin@, administrator@, hostmaster@, postmaster@, and webmaster@) attached to the domain being confirmed.

The idea is that only a customer that owns or controls a domain can receive and respond to email messages sent to these email addresses. Domain control can also be established by a manual lookup in WhoIs, by requiring the customer to make an agreed-upon change to a web page secured by the domain, or by obtaining confirmation from the domain name registrar.
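
The sketch below simply constructs the candidate confirmation addresses for a domain from the five permitted prefixes listed above; any WhoIs-listed contact addresses would be gathered separately, and that lookup is not shown.

    PERMITTED_PREFIXES = ("admin", "administrator", "hostmaster", "postmaster", "webmaster")

    def dv_confirmation_addresses(domain: str) -> list:
        """Candidate addresses a CA may email to confirm control of the given domain."""
        return [f"{prefix}@{domain}" for prefix in PERMITTED_PREFIXES]

    print(dv_confirmation_addresses("example.com"))
    # ['admin@example.com', 'administrator@example.com', 'hostmaster@example.com',
    #  'postmaster@example.com', 'webmaster@example.com']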

Some CAs take additional steps during DV validation, such as making an automated telephone call to the customer’s phone number and checking the customer’s credit card number, in order to establish potential points of contact for future reference. The CA is also permitted to insert a country code in the certificate along with all verified domains if the CA has confirmed that the customer’s IP address is associated with the country, but no identifying information about the customer (such as organization name) is included in DV certificates.

The tests listed above – Requirements Common to All Certificates and requiring proof of domain ownership or control in the DNS – are one major advantage of CA-issued certificates over self-signed certificates (including DANE certificates), which are not subject to any similar public safeguards.

Next Level of Authentication – Organization Validation (OV) Certificates

The next level of customer authentication is for Organization Validation (OV) certificates. OV certificate validation involves all the same steps as DV certificates, plus the CA takes additional steps to confirm the identity and location of the customer and includes that information inside the OV certificate before issuance.

During OV validation, the CA first looks for data confirming the customer’s identity, address, and phone number in any of the following information sources: a third party business data base (such as Dun & Bradstreet), a government business registry (such as the Corporation Commissioner’s website), a letter from a licensed attorney or accountant vouching for the customer’s identity (the CA must follow up to confirm the attorney or accountant is duly licensed and actually signed the letter), or a site visit to the customer by the CA or its agent. A CA can also use a copy of a customer document, such as a utility bill, credit card bill, bank statement, etc. to confirm the customer’s address, but not the customer’s identity.

The CA must then confirm that the contact person who is requesting the OV certificate is really connected with the customer organization (e.g., an employee or agent) that will be listed inside the certificate. To do this, the CA typically places a telephone call to the contact person using a telephone number for the organization found in a public data base (not using a phone number offered by the contact person, which might simply be the person’s mobile phone number). If the contact person representing the organization can be reached through the organization’s main phone number, the link is confirmed and the CA can presume the contact person has authority to order a certificate for the organization.

Other alternatives for confirming the link between the customer contact person and the organization include mailing a letter or sending a courier package to the person at the organization’s official address with a password that is then entered by the contact person on the CA’s order page, or placing a call to a senior official with the organization to confirm the authority of the contact person to order a certificate.

Once OV authentication has been completed, the CA will include an organization name, city, state or province, and country inside the OV certificate, along with one or more domains the customer owns or controls.

Highest Level of Authentication – Extended Validation (EV) Certificates

EV certificates represent the highest level of authentication, and are typically rewarded with a favorable user interface by the browsers and applications.

For EV authentication, the CA must confirm that the customer’s organization is properly registered as “active” with the appropriate government registry and can also be found in a third party business data base. For companies in existence for less than three years which cannot be found in a business data base, the CA must take additional steps such as requiring an attorney or accountant letter or confirming that the customer maintains a bank account.

In addition, the CA must contact the person who signs the Subscriber Agreement (i.e., in most cases, the person who clicks “I Agree” on the CA’s website) to verify the signature, and must independently confirm the name, title, and agency of that person within the organization, typically by finding the person listed as a company officer in a public business data base, by calling the organization’s HR department through a publicly listed telephone number for the organization, or by receiving a confirming attorney or accountant letter that is independently verified.

The CA must further check the name and domain submitted by the customer to make sure it does not contain a mixed character set (for example, to make sure the Cyrillic letter “а” has not been inserted in place of the Western letter “a”, which can be used for fraudulent purposes). Finally, the EV vetter must compare all authentication data to confirm consistency (e.g., make sure all customer data contains the same address, etc.), and conduct final cross-correlation and due diligence to look for anomalies. The entire vetting file must then be independently reviewed and approved by a second vetter before an EV certificate is issued.
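
A minimal sketch of the mixed-script check described above; it guesses each letter’s script from its Unicode character name, which is crude but enough to catch a Cyrillic “а” hiding among Latin letters. Real vetting systems use full Unicode script data and confusables tables.

    import unicodedata

    def crude_script(ch: str) -> str:
        """Very rough script guess from the Unicode character name (illustrative only)."""
        try:
            return unicodedata.name(ch).split(" ")[0]   # e.g. 'LATIN', 'CYRILLIC', 'GREEK'
        except ValueError:
            return "UNKNOWN"

    def mixed_script(label: str) -> bool:
        """Flag a domain label that mixes scripts, e.g. a Cyrillic letter among Latin ones."""
        scripts = {crude_script(ch) for ch in label if ch.isalpha()}
        return len(scripts) > 1

    print(mixed_script("example"))    # False: all Latin
    print(mixed_script("exаmple"))    # True: the third letter is U+0430, CYRILLIC SMALL LETTER A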

Three percent of all OV and EV vetting files must be reviewed each year by another authorized vetter for quality control purposes.

When an EV certificate is issued, it contains not only the standard OV information fields (organization name, city, state or province, and country), but also the entity type (corporation, government agency, etc.), location of incorporation (state, province, or country), and government registration number so that the customer is uniquely identified to the public. Most browsers display the EV certificate organization name and country of incorporation in the user interface “green bar” visible to the public at the customer’s encrypted https:// web page secured by the EV certificate.

Conclusion: CA Authentication Is a Valuable Safeguard for the Public

Because CA certificates are issued from roots trusted by all the major browsers (which provide trust indicators to the public at the customer’s secure pages), it is important that a third party first verify the technical strength of the certificate and the customer’s ownership and control of the domains included in the certificate (for DV certificates), as well as the identity of the customer (for OV and EV certificates). This helps prevent web site impersonation and fraud.

In addition, all CA-issued certificates protect the public in another way — by triggering a browser warning if the domain contained in a certificate does not match the domain visited by the public (so a warning is displayed if someone visits https://www.angel.com and the certificate securing the site is issued to a different domain, http://www.evil.com). This helps ensure that a user won’t be fooled as to the true domain of the secure web page being visited.
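
That browser check is standard TLS hostname verification; the short sketch below performs the same check from Python, where a certificate issued for a different domain makes the handshake fail. The host name used here is just an example.

    import socket
    import ssl

    def certificate_matches_hostname(hostname: str, port: int = 443) -> bool:
        """True if the server presents a trusted certificate valid for `hostname`."""
        context = ssl.create_default_context()   # check_hostname=True, CERT_REQUIRED by default
        try:
            with socket.create_connection((hostname, port), timeout=10) as sock:
                with context.wrap_socket(sock, server_hostname=hostname):
                    return True
        except ssl.SSLCertVerificationError:
            return False   # e.g. the certificate was issued to a different domain

    print(certificate_matches_hostname("www.example.com"))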