Paragon Software Group Releases NTFS for Mac® 14 – the Industry’s Fastest Solution for Full Access to Windows Partitions on OS X 10.11 El Capitan

Paragon NTFS for Mac 14 is the industry’s fastest driver giving OS X full read and write access to Windows-formatted NTFS partitions. The new version is fully compatible with Apple’s new operating system OS X 10.11 El Capitan, which was launched yesterday, and still supports all versions back to 10.8 Mountain Lion. Internal tests show Paragon NTFS for Mac 14 is the only solution on the market to match the file transfer speed of Apple’s native driver on SSDs.

Paragon NTFS for Mac 14 achieved 700MB/sec (write) and 800MB/sec (read) on the internal SSD of a MacBook Pro. It also performs as well as HFS+ with external storage: 250MB/sec write and 240MB/sec read on a two-SSD RAID device, and 210MB/sec write and 210MB/sec read on an ordinary external drive (2TB USB 3.0 Seagate Expansion Drive 3.5″).

To ensure a higher level of security, El Capitan delivers a new protection feature. System Integrity Protection prevents modifications to certain system files, folders and processes. This protects components on disk and at run-time, only allowing system binaries to be modified by the system installer and software updates. Code injection and runtime attachments to system binaries are no longer allowed. Paragon NTFS for Mac 14 is fully compatible with Apple’s new security policy, ensuring fast, hassle-free and safe access to NTFS partitions from OS X 10.11 El Capitan.

Once the program is installed, the user can get started right away: conveniently navigate contents and read, edit, copy or create files and folders. The program guarantees advanced support of NTFS file systems and provides fast and transparent read/write access to any NTFS partition under OS X 10.11. Paragon has been the leader in cross-platform storage software for 20 years, delivering proven maximum performance, stability and security for cross-platform work between Mac, Windows and other operating systems.


Key functions:

  • Full OS X 10.11 El Capitan support.
  • Ultra-quick read/write access to NTFS files in OS X El Capitan.
  • No limit to file or partition sizes (within NTFS and OS X constraints).
  • Supports special NTFS functions in OS X El Capitan such as resource forks, hardlinks, symlinks and file permissions (POSIX file attributes).
  • File transfer rates on NTFS partitions match those on native HFS partitions.
  • Unparalleled stability – even during peak system utilization!
  • Simply install it and go to work. No further system adjustments are necessary once it has been installed.
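Since the driver mounts NTFS volumes like any other file system, a quick way to confirm that a given volume really is read/write is a small round-trip test. A minimal sketch in Python; the mount point name at the end is a hypothetical example, not a path the product creates:

```python
import os
import tempfile

def is_writable_mount(mount_point):
    """Return True if we can create, read back, and delete a file
    at the given mount point (e.g. an NTFS volume under /Volumes)."""
    if not os.path.isdir(mount_point):
        return False
    try:
        # NamedTemporaryFile cleans up after itself, even on failure.
        with tempfile.NamedTemporaryFile(dir=mount_point, mode="w+") as f:
            f.write("probe")
            f.seek(0)
            return f.read() == "probe"
    except OSError:
        # A read-only volume (or missing permission) fails the write.
        return False

# Example: is_writable_mount("/Volumes/NTFS_DISK")
```

On OS X, external volumes normally appear under /Volumes, so passing such a path exercises the installed driver end to end.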


System requirements

  • OS X 10.8 Mountain Lion through OS X 10.11 El Capitan.

Availability:

Paragon NTFS for Mac 14 is available for immediate download for $19.99 at http://www.paragon-software.com/home/ntfs-mac/index.html. All users who purchased NTFS for Mac 12 will get a free upgrade to version 14. They will receive an invitation to upgrade via email, or they can view their real-time upgrade status at http://www.paragon-software.com/landing-pages/2015/ntfs-mac-el-capitan-upgrade/index.html.

White Paper: Maximize Data Protection for Physical and Virtual Systems

For IT organizations of all sizes, storage continues to be the primary resource driving operating costs. Entrenched at the top of the list of storage problems is data protection, which includes the perennial problem of data backup. IT groups at large enterprises continue to struggle to meet Service Level Agreements (SLAs) that keep tightening both the Recovery Point Objective (RPO), the maximum amount of data, measured in time prior to the disruption, that could be lost in the recovery process, and the Recovery Time Objective (RTO), the maximum period of time that recovery could take. Meanwhile, IT administrators at small to medium-sized businesses (SMBs), who often have less storage expertise and tighter budgets, struggle with more prosaic issues, such as choosing technologies to simplify backup processing in shrinking backup windows.
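The RPO/RTO arithmetic can be made concrete: with periodic backups, the worst case loses one full backup interval plus the time of the in-flight backup that never completed. A minimal sketch, with all figures illustrative rather than taken from the white paper:

```python
from datetime import timedelta

def worst_case_data_loss(backup_interval, backup_duration):
    """Worst-case RPO for periodic backups: a failure can strike just
    before the next backup completes, losing one full interval plus
    the time the in-flight backup would have needed."""
    return backup_interval + backup_duration

def meets_sla(backup_interval, backup_duration, restore_time, rpo, rto):
    """Check a simple periodic-backup schedule against RPO/RTO targets."""
    return (worst_case_data_loss(backup_interval, backup_duration) <= rpo
            and restore_time <= rto)

# Nightly backups that take 2 hours, restores that take 4 hours,
# measured against a 26-hour RPO and a 6-hour RTO:
ok = meets_sla(timedelta(hours=24), timedelta(hours=2),
               timedelta(hours=4),
               rpo=timedelta(hours=26), rto=timedelta(hours=6))
```

Tightening either target immediately shows up as a shorter allowable backup interval or a faster required restore path.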


To help IT at all types of sites deal with data protection issues, Paragon Drive Backup 10 Server creates an exact image copy of a live disk drive on physical and virtual servers and workstations running a Windows-based operating system. Using multiple snapshot technologies, Drive Backup 10 Server is able to maintain transactional integrity of the file system structures on the disk, including all Windows OS files, configuration files, and databases.

More importantly, IT does not have to purchase extra-cost options to leverage Drive Backup 10 Server in a virtual environment, such as VMware® vSphere or Microsoft Hyper-V™. Running Drive Backup Server, an IT administrator can go beyond simply restoring a backup image as a virtual machine application or as a collection of Windows OS files. By applying Paragon’s 3rd generation of Adaptive Restore technology, an IT administrator is able to insert new drivers into an image and create a bootable volume for an entirely different physical or virtual environment.

Of particular importance for IT at SMB sites, Paragon Drive Backup 10 Server is very easy to deploy and use. In addition, the optional Paragon Remote Management (PRM) application provides a single management point for launching and running scripts developed with Paragon Drive Backup on systems across a distributed enterprise environment. As a result, PRM is especially useful when running multiple VMs in a virtual environment, as IT administrators can rapidly back up and restore systems in minutes to garner an immediate return on investment.

More importantly, as CIOs focus on the virtualization of systems and storage as the magic philter to extract higher resource utilization and lower management costs, IT administrators are now working with limited numbers of abstract device pools rather than multiple instances of proprietary devices. VMware® vSphere™ 4 typifies such an environment with multiple heterogeneous servers running ESX® or ESXi™ hosting multiple virtual machines (VMs) running a variety of server and desktop operating systems. In the process of simplification, however, multiple levels of logical abstraction and resource redirection can also obscure and complicate important IT operations.

Among the hardest hit IT operations are those associated with file-level data protection. That has the potential to turn IT’s magic philter for gaining operating efficiency into a poison pill for compliance with regulatory mandates to secure and maintain critical business data. Fortunately, a key characteristic of a virtual environment is the encapsulation of a VM’s logical disk volumes as single physical disk files. This representation makes image-level backups faster than traditional file-level backups and enhances restoration, as virtual disks can be restored either as a whole image or as individual files. That’s why many general-purpose backup packages integrate with VMware Consolidated Backup (VCB) to provide imaging-based backup.

Nonetheless, VCB assumes a shared-disk storage infrastructure via a Fibre Channel or iSCSI SAN. For SMB sites functioning well with direct attached storage (DAS) and simple file sharing via network attached storage (NAS) or FTP storage servers, the need to introduce a SAN in order to protect data in a virtual operating environment presents a prodigious stumbling block.

Paragon Drive Backup 10 Server, however, provides IT with dedicated image-level data protection for VMs running a Windows OS on either a VMware hypervisor without VCB integration or a Microsoft Hyper-V environment. SMB sites can use Paragon Drive Backup 10 Server to provide VMs with full data protection without a major storage infrastructure change from DAS to SAN. What’s more, Paragon enhances the value of Drive Backup 10 and PRM for SMB sites with support for the ESXi hypervisor under the free public license as well as with a full paid license. With more servers featuring ESXi firmware bundles, the ability to work with Windows-based VMs hosted on this hypervisor with the public license is particularly important for IT at SMB sites.

Download the free White Paper.

Paragon Software Enables Migration of Windows Systems to a Virtual Environment

In keeping with its commitment to enabling seamless transitions between operating systems, Paragon Software Group (PSG), the technology leader in innovative data security and data management solutions, today announced the debut of its Virtualization Manager 2009 Corporate Edition.

As IT executives look to conserve budget resources, server virtualization is proving to be an attractive option, offering benefits including optimum resource utilization, reduction in hardware expenses, lower energy consumption, and the ability to reallocate IT personnel. Fewer servers also require less maintenance, thus further reducing IT infrastructure costs.

Paragon’s new Virtualization Manager 2009 Corporate Edition was developed to help ease the migration process and enable businesses to maximize their current IT investments. A powerful tool that easily migrates a Windows-based system to a virtual environment, Paragon’s Virtualization Manager 2009 Corporate Edition helps businesses maximize server utility, increasing efficiency while decreasing energy costs by up to 80 percent.

Virtualization Manager also easily migrates a Windows-based system to a different environment, whether physical to physical (P2P) or physical to virtual (P2V). Additional Virtualization Manager functionalities include: recovering an OS after an unsuccessful system virtualization by a third-party tool; creating a virtual clone of the physical system and saving it on a network for backup purposes; providing a failsafe that ensures business continuity; and enabling the installation of several different operating systems on one computer.

“Paragon Software is committed to offering the most powerful and easy to use system migration and data conversion tools on the market,” noted Tom Fedro, president of Paragon Software Group. “Virtualization Manager 2009 Corporate is truly state of the art technology that saves our customers time and money while providing complete flexibility to migrate within both virtual and physical system environments.”

Key features and benefits include:

—  Full Windows OS Support – Guaranteed support for any Windows OS post-Win2K
—  P2V Copy – Migrate a physical system to a virtual disk
—  P2V Restore – Migrate a physical system backed up with Paragon software to a virtual disk
—  P2V Adjust – Recover OS startup ability after unsuccessful virtualization by a 3rd party tool
—  P2P Adjust – Recover OS startup ability after system migration to a different hardware platform
—  Hot Copy Technology – Online processing of locked hard disks
—  Smart Driver Injector – Makes adding new drivers smooth and simple
—  Virtual Disk Map – Preview changes before they are applied
—  Flexible Destination Choices – Choose any destination to place virtual disks
—  Partition Auto-Resize – Easily set desired partition size when creating virtual disks
—  2 Types of VMware Disks – Create either IDE or SCSI disks for VMware
—  Disk File Split for VMware – Automatically split virtual images into files of 2 GB each for improved data management
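The 2 GB split in the last feature reflects a general technique: file systems and transfer tools with file-size caps (FAT32, for instance, tops out just under 4 GB) force large virtual disk images to be cut into fixed-size chunks. A generic sketch of the idea, not Paragon’s actual implementation:

```python
import os

def split_file(path, chunk_size=2 * 1024**3):
    """Cut a large file (e.g. a virtual disk image) into numbered
    chunks of at most chunk_size bytes; returns the chunk paths.
    For brevity this reads a whole chunk at a time; a production
    tool would stream in much smaller buffers."""
    chunks = []
    with open(path, "rb") as src:
        index = 0
        while True:
            data = src.read(chunk_size)
            if not data:
                break
            chunk_path = "%s.%03d" % (path, index)
            with open(chunk_path, "wb") as dst:
                dst.write(data)
            chunks.append(chunk_path)
            index += 1
    return chunks
```

Reassembly is the inverse: concatenate the numbered chunks back into one file in index order.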

Virtualization Manager is now available from Paragon Software; please visit www.paragon-software.com/business/vm/ to learn more.

Review by: VMblog.com

The Power Of Partitioning

10/9/09 By: Christian Perry

A segment of storage in almost every data center skirts by every day without doing much work. But through the use of partitioning, it’s possible to get that storage back to work and keep it there.

“Properly partitioned hard disks will allow the data center to maximize its storage investments by reallocating unused disk space and consolidating data, resulting in the need to purchase less new storage,” says Jim Thomas, technical services manager for Paragon Software Group (www.paragon-software.com). “Increased system performance can also be noticed through defragmentation of partition contents and the MFT [Master File Table].”

Key Points

• Partitioning can help data centers deploy previously unused storage space for applications, testing, and other tasks by dividing hard drives into separate storage areas.

• Although the actual partitioning process is simple, experts recommend planning before conducting partitioning sessions to determine the best use for the technology and prepare for potential changes.

• Partitioning can force drive letter assignment changes, run into existing file system problems, and cause other issues, so data center personnel should expect the possibility of some problems with the technology.

Division Lesson

At its core, partitioning is the process of dividing hard drives into separate storage areas, or partitions, to make use of previously unused disk space. According to Curtis Breville, data storage evangelist for Crossroads Systems (www.crossroads.com), partitioning was originally designed to dedicate part of a disk drive to a specific purpose to allow the data to be physically close together and speed up access to data on a device that used random-access searching.

“Partitioning also allowed for better use of disk space and kept one application from taking away space needed by another. With astute planning and accurate growth prediction, each application would have the right amount of storage, and there would be less wasted disk [space],” Breville says.

Today’s flexible partitioning technologies continue to build on that performance-enhancing tradition, delivering automated and unattended operations, RAID support, dynamic disk support, Windows-based tools for on-the-fly partitioning, and even bootable recovery media to enable partitioning operations while systems are offline. Also relatively new is thin provisioning, which allows partitioning without the need to physically allocate storage at initial setup.
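Thin provisioning works on the same principle as a sparse file: the logical size is committed up front, while disk blocks are allocated only as data is actually written. The file-level analogue can be demonstrated on any sparse-file-capable file system; the path in the comment is just an example:

```python
import os

def make_sparse(path, apparent_size):
    """Create a file whose logical size is apparent_size bytes but
    which occupies (almost) no disk blocks until data is written.
    Returns (logical size, bytes actually allocated on disk)."""
    with open(path, "wb") as f:
        f.truncate(apparent_size)  # extends the file without writing data
    st = os.stat(path)
    # st_blocks counts the 512-byte blocks actually allocated on disk.
    return st.st_size, st.st_blocks * 512

# e.g. make_sparse("/tmp/thin.img", 1 << 30) reports a 1 GiB logical
# size while allocating essentially nothing.
```

A thin-provisioned partition behaves the same way at the storage layer: capacity is promised logically and backed physically only on demand.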

Partition Plan

Data center managers who neglect to implement partitioning for fear of disrupting delicate system environments might be pleased to learn that partitioning can occur while systems are online. However, before moving ahead with partitioning, experts recommend some basic planning procedures to ensure that the technology is working to its full potential.

“Typically, after the goals and business case for partitioning have been established, history performance data on existing servers and applications is collected to assist in the planning process as well as information on the importance of each application to the business,” explains Gary Thome, director of strategy and architecture for Infrastructure Software and Blades at HP (www.hp.com). “Architectures and partitioning software are chosen based on the goals of the project, along with plans for management, high availability and disaster recovery, and backup and security procedures.”

Thome also recommends determining the metrics the data center uses (or will use) to measure success. For example, is IT judged based on response time to end users? On percent of unplanned downtime? On costs of capital expenditures or of the power bill? Also, data centers planning to implement partitioning should gather utilization data from their existing servers, storage, and applications, Thome says.

The actual process of partitioning new or existing drives is surprisingly simple. “Most partitioning utilities show each hard drive in the system with graphic representation of the partition layout. After installing the partitioning software, an operation such as resizing partitions is usually as easy as dragging the border of a partition to the desired size or entering the desired size of the partition in numerical form, before allowing the application to carry out the partitioning operations behind the scenes,” Paragon’s Thomas says.

Rolling partitioning into production—that is, moving programs and data into a partitioned environment—can be accomplished with tools that automate the transfer of applications from physical servers to virtual servers, Thome says. From there, data centers can use ongoing monitoring and capacity planning to ensure the optimal distribution of workload and resources.

Tread Carefully

Although partitioning is generally a safe process, it’s not without pitfalls. For example, Thomas warns that when booting a server from recovery media, drive letter assignments might display differently than how they appeared in the host operating system. Further, he warns that file system errors and bad sectors can cause numerous problems, so it’s wise to check for physical errors and file system errors before creating or modifying partitions.

James Wilson, product manager for HP StorageWorks, says that another concern with storage cache partitioning is that the time required to move cache is variable and does not address short-term hot spots or sudden changes in workload. Further, the cache being moved is not available to any partition from the start of the move until the cache is reassigned to the new partition.

Despite these potential drawbacks, partitioning is here to stay in data center environments as an effective method for increasing operational efficiency. “Partitioning is like cutting a child’s birthday cake,” Thome says. “As long as you plan ahead and measure carefully, everybody is going to be happy.”