Search Results

  • Mastering NVMe Hot-plug: Navigating Challenges and Ensuring Safe Removal

    Hot-plugging, the ability to insert and remove a device without powering down the system, has long been an important feature for users trying to maximize uptime on often mission-critical systems. For decades it has been a standard feature of a wide variety of data storage devices, such as SAS/SATA hard drives (HDDs) and solid-state drives (SSDs), and it is also a mainstay of most USB devices. Given how long this feature has been around and how widely it has been adopted across different protocols, one might be forgiven for thinking it would be standard with NVMe storage; however, that is not the case. Unlike the SAS/SATA interface, which was designed with hot-plugging in mind, NVMe was originally designed to bypass the performance limitations of those interfaces. That is not to say that NVMe media lacks this feature entirely. While NVMe hot-plug support isn’t as standardized as it is for SAS/SATA drives, some NVMe SSDs do have the ability to hot-plug. However, this is no simple task, and support is contingent on the following main factors. It is important to note that all of these factors must be met; if even one is missing, NVMe hot-plug will not be supported.

    1. System Requirements for NVMe Hot-plug: The system/motherboard the drives will be connected to must support NVMe hot-plug. Typically, this information can be found in the associated user/technical manuals.

    2. Operating System Support for NVMe Hot-plug: Certain operating systems and drivers are designed to handle hot-plugging of NVMe devices.

    3. Drive Requirements for NVMe Hot-plug: Hot-plugging is a feature typically supported by DC or Enterprise grade NVMe drives, which is to say U.2/U.3 NVMe drives. M.2 drives were not designed to support hot-plugging, and since most of them are client grade, they lack the support to begin with.

    4. PCIe Switch Solution for NVMe Hot-plug: NVMe drives require dedicated resources. A PCIe switch ensures the system CPU provides resources to each channel where an NVMe SSD could be attached.

    Assuming you meet all four of these requirements, this raises the next question: is it safe to simply remove the NVMe SSD from the system and replace it with a new one? In short, no. While it is possible to remove and replace an NVMe SSD from a running system, a series of best practices for NVMe hot-plugging must be followed in order to avoid any issues. Primarily, this requires an understanding of both the device side and the host side. In this scenario, the NVMe SSD is the device side, and the host side refers to the system/OS. Before removing the drive, you must first notify the host that you would like to remove it, because the host side is responsible for managing the overall file system and structure, and is also in charge of maintaining data integrity. Once the request has been made, the host side will determine whether the drive is in a safe state. If approved, the host side will notify the user that it is safe to remove the drive. The notification step is necessary to ensure both data integrity and the smooth removal of the SSD. If one were to skip this step, it’s possible that the removed drive was in the middle of read/write operations, meaning it wasn’t safe to eject. This can lead to serious issues such as data loss or total corruption of the drive.
    In order to facilitate the smooth removal of drives and avoid data loss, HighPoint has integrated an unplug feature. This easy-to-use feature is available through the HighPoint NVMe RAID Management Software, via either the WebGUI (Web-Based Graphical User Interface) or the CLI (Command Line Interface). When it comes time to remove a drive, the user simply invokes this feature to notify the host of the intent to eject it. The user will then see a notification informing them that the drive is now removable; a sketch of the underlying host-side sequence appears below.
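
    For readers curious what this host-side notification amounts to at the OS level, the following is a minimal sketch of a safe-removal sequence on Linux. The PCI address and block device node are placeholder values, and the sketch assumes nothing on the drive is mounted; it is illustrative only – HighPoint’s unplug feature performs the equivalent checks and notifications for you.

        import os
        import sys

        # Placeholder values for illustration; substitute your drive's actual
        # PCI address (see `lspci`) and block device node.
        PCI_ADDR = "0000:41:00.0"
        BLOCK_DEV = "/dev/nvme1n1"

        def is_mounted(dev: str) -> bool:
            """Return True if any entry in /proc/mounts uses this device."""
            with open("/proc/mounts") as f:
                return any(line.split()[0].startswith(dev) for line in f)

        def safe_remove(pci_addr: str, dev: str) -> None:
            if is_mounted(dev):
                sys.exit(f"{dev} is still mounted -- unmount it before removal")
            os.sync()  # flush cached writes so no I/O is in flight
            # Notify the kernel to detach the device -- the "tell the host
            # before you pull the drive" step described above.
            with open(f"/sys/bus/pci/devices/{pci_addr}/remove", "w") as f:
                f.write("1")
            print(f"{dev} detached -- it is now safe to physically remove the SSD")

        if __name__ == "__main__":
            safe_remove(PCI_ADDR, BLOCK_DEV)

    After a replacement drive is inserted, writing 1 to /sys/bus/pci/rescan prompts the kernel to re-enumerate the bus and bring the new SSD online.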

  • PCIe Gen5 NVMe RAID Series Proactive Environmental Solution

    Keep NVMe Devices Running Strong with HighPoint’s Comprehensive Storage Health Monitoring, Management and Analysis Suite

    HighPoint’s comprehensive Storage Health Monitoring, Management and Analysis suite provides a variety of pro-active storage health and security features and real-time monitoring toolsets that enable administrators to keep close tabs on all hosted NVMe media and make sure temperature, electrical characteristics and S.M.A.R.T. (self-monitoring, analysis and reporting technology) attributes are in line with the manufacturer’s recommended specifications. An intelligent, pro-active alert system, comprised of on-device LED indicators, an audible alarm and software-based event trackers and logging services, can be configured to keep administrators apprised of any environmental changes, whether on site or in the field.

    Left to their own devices, HighPoint PCIe Gen5 and Gen4 NVMe RAID AICs and Adapters will intelligently monitor each hosted SSD and RAID configuration to ensure everything is running smoothly. However, customers that want full manual control, or need to tune a storage configuration for a particular application or computing environment, can install HighPoint’s NVMe Management Suite, which includes an arsenal of pro-active monitoring tools and features our Intelligent Throttling Alert System and Active Sensor Tracking and Logging Services.

    Active Sensor Tracking & Logging Services

    HighPoint’s WebGUI now incorporates a real-time NVMe sensor logging system which tracks and records the temperature, fan speed and electrical characteristics of the adapter and each hosted SSD over time, and presents the data via a series of simple plotted curves and line charts. These records can be exported as needed, and can help administrators narrow the scope of troubleshooting tasks by identifying potential faults and at-risk storage media, and implement preventative measures to maximize the lifespan of the RAID array and maintain optimal performance.

    Intelligent Throttling Alert System with Customizable Temperature Thresholds

    SHI (Storage Health Inspector) is a key feature of HighPoint NVMe solutions. It is integrated into the WebGUI and CLI (command line utility) interfaces and enables administrators to monitor the temperature of each NVMe device in real time. Viewing the details for each SSD enables threshold configuration; temperatures can be adjusted to correspond with the manufacturer’s official specifications, and the system can be instructed to notify one or more administrators whenever a critical threshold is crossed (a conceptual sketch of this threshold logic appears below). The interface features full manual fan control for those that need to fine-tune a configuration for a specific platform. Administrators can select from five settings, including an option to fully disable the fans; ideal for workflows that require a silent work environment.

    Integrated LED Indication and Audible Alerts

    Each Rocket Series Gen5 NVMe AIC and Adapter is equipped with a series of LED indicators that track the status and operating condition of hosted NVMe media and/or RAID configurations, the solution’s PCIe connectivity status, and the behavior/status of the PCIe IC. The hardware sensors and indicators are designed to work in conjunction with a range of services and features associated with HighPoint’s Storage Health Monitoring, Management and Analysis suite. LED indicators are built into the ventilated PCIe bracket, and use simple color-coding (Green = Good, Yellow = Warning, Red = Error/Failure) and flash-patterns to signal a variety of status and operational data.
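
    To give a rough sense of what threshold-based alerting amounts to conceptually, the sketch below polls NVMe composite temperatures through the Linux hwmon interface and fires a notification callback when a configurable threshold is crossed. The threshold value and the alert action are placeholder assumptions; HighPoint’s suite implements this logic, plus fan control, logging and email, within the WebGUI/CLI.

        import glob
        import time

        THRESHOLD_C = 70.0   # assumed threshold; set per your SSD's spec sheet
        POLL_SECS = 10

        def nvme_temps_c():
            """Yield (hwmon_dir, temp in Celsius) for each NVMe hwmon sensor."""
            for name_file in glob.glob("/sys/class/hwmon/hwmon*/name"):
                with open(name_file) as f:
                    if f.read().strip() != "nvme":
                        continue
                hwmon_dir = name_file.rsplit("/", 1)[0]
                with open(hwmon_dir + "/temp1_input") as f:
                    yield hwmon_dir, int(f.read()) / 1000.0  # millidegrees -> C

        def alert(sensor: str, temp: float) -> None:
            # Placeholder: a real suite would log the event and/or send email.
            print(f"WARNING: {sensor} at {temp:.1f} C exceeds {THRESHOLD_C} C")

        while True:
            for sensor, temp in nvme_temps_c():
                if temp >= THRESHOLD_C:
                    alert(sensor, temp)
            time.sleep(POLL_SECS)
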
    The audible alarm (aka the “warning beeper”) is mounted directly to the AIC/Adapter PCB. It is capable of notifying administrators when temperature thresholds are breached, the RPM of the cooling apparatus drops below recommended levels, or a device/RAID array has encountered an error or entered a failure state. Though enabled by default, the alarm can be disabled using the WebGUI or CLI for applications that require a silent working environment.

    Multi-Tiered Notification Systems

    HighPoint’s NVMe Management Suite continually operates in the background to keep tabs on NVMe storage assets. If anything should go awry, the Intelligent Throttling Alert System can notify administrators via an audible alarm, warning messages, an event log and an email notification system. These features can be configured independently to meet the requirements of the platform and application.

    Audible Error Alarm – the alarm will sound whenever a temperature threshold is exceeded, fan speed drops too low, or any S.M.A.R.T. attribute triggers an error or warning. The alarm can be enabled or disabled at will using the WebGUI or CLI interfaces.

    Event Log – the WebGUI and CLI will automatically log any administrator action, warning and alert issued by SHI or the host controller. The log can be easily exported as a text document for troubleshooting purposes.

    Email Notification – customers can instruct the WebGUI and CLI to compose and send an email notification to one or more administrators whenever a temperature threshold is crossed, and if any alert or warning is issued by SHI or the host controller.

    In summary, HighPoint’s Storage Health Monitoring, Management and Analysis Suite is a robust, comprehensive toolset designed to optimize the performance, reliability and endurance of NVMe RAID storage for industrial, server and datacenter applications. The solution utilizes an array of hardware sensors to monitor and record the environmental conditions and operating status of hosted NVMe devices in real time, and relays this data to the administrator via a series of simple LED indicators, audible alarms, and intuitive graphical interfaces. The suite can be custom-tailored to complement service and maintenance workflows (both on-site and remote) for a wide range of hardware & software environments, via configurable temperature thresholds, manual fan speed settings, and programmable alert services such as event logging and email notification.

  • Why Cooling Matters on NVMe vs SATA

    In our previous blog we compared NVMe to SATA drives and explored the variety of advantages that NVMe drives can provide. However, it must be noted that these advantages come at a price: NVMe media generates significantly more heat than a standard SAS/SATA drive. Consequently, if one isn’t careful, they might end up with an NVMe drive that is overheating. So, in order to use an NVMe drive effectively, the user needs to manage the drive carefully and take the necessary precautions to prevent this from happening.

    What causes NVMe drives to generate so much heat?

    Before delving into the consequences of an overheating NVMe drive, one should first understand why NVMe drives generate so much more heat than their SATA counterparts. NVMe drives come in many different shapes and sizes, one of the most popular being the M.2 form factor. Often described as looking like a stick of gum, these are small, compact drives whose limited space for heat dissipation can lead to increased temperatures if not properly managed. In addition, these thin drives are densely packed with essential components, like NAND chips, each of which generates heat of its own. Most important, however, is the NVMe drive’s high-speed transfer bandwidth: this increased activity leads to higher power consumption and heat generation.

    What are the known effects of NVMe overheating?

    One of the first signs that an NVMe drive is getting too warm is that its performance will start to degrade. In order to prevent overheating, NVMe drives perform an action known as thermal throttling. This automatic process is triggered when the drive reaches a certain temperature threshold; once reached, the drive purposely reduces its overall performance, which helps reduce the total amount of heat it generates. While the performance loss may be an annoyance to some, it should be noted that thermal throttling is an important safety feature built into the drives to protect the onboard components and the overall integrity of the drive. NVMe drives are built to operate within a specific temperature range, and when operating at or above the upper threshold, it’s possible to permanently damage the drive.

    Preventing NVMe Thermal Throttling

    Simply put, in order to avoid thermal throttling, one must keep the drives operating within their acceptable threshold. There are two main factors to consider in regards to NVMe thermal management. The first is the overall system environment: generally, the more space and airflow a system has, the lower the overall ambient temperature will be, which in turn helps the NVMe drive stay within an acceptable temperature range. The second factor is the drive itself. While many NVMe drives offer their own standalone heatsink, these can only be used with a single, specific drive, and are designed only for use when the drive is directly connected to the motherboard. To handle this, HighPoint has been continually researching and developing newer, more efficient cooling solutions to tackle the issue of thermal throttling. Our latest generation of cooling solutions is designed to protect any and all installed NVMe drives and keep them cool under full load. Learn more about HighPoint’s Intelligent Cooling Solution and how it addresses thermal throttling, here.
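
    One practical way to gauge how close a drive is to its throttling point is to compare its live composite temperature against the warning threshold (WCTEMP) the manufacturer programs into the controller. The sketch below does this with the open-source nvme-cli tool; the JSON field names are assumptions based on common nvme-cli versions, so verify them against your installation.

        import json
        import subprocess

        DEV = "/dev/nvme0"  # adjust to your drive

        def nvme_json(subcmd: str) -> dict:
            """Run an nvme-cli subcommand and parse its JSON output."""
            out = subprocess.check_output(["nvme", subcmd, DEV, "-o", "json"])
            return json.loads(out)

        # NVMe reports composite temperature in Kelvin.
        temp_c = nvme_json("smart-log")["temperature"] - 273
        warn_c = nvme_json("id-ctrl")["wctemp"] - 273  # WCTEMP: where throttling may begin

        print(f"Composite temperature: {temp_c} C (warning threshold: {warn_c} C)")
        if temp_c >= warn_c:
            print("At or above the warning threshold -- expect thermal throttling")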

  • Why NVMe rather than SATA

    Intro

    As technology evolves, the need for high-speed, high-capacity storage is only becoming more prevalent. While SATA-based drives used to be the go-to data storage solution, nowadays they have largely been supplanted by newer NVMe drives. In this article, we’ll take a closer look at NVMe and why it’s become the gold standard and the future of most storage solutions.

    AHCI Protocol vs. NVMe: SATA Limitations for SSDs

    While both are flash-memory-based storage devices, the actual interfaces behind NVMe (Non-Volatile Memory Express) and SATA solid state drives (SSDs) differ significantly. SATA SSDs rely on the AHCI (Advanced Host Controller Interface) protocol, which operates as an interface between the SATA controller and the storage devices connected to it. AHCI was originally designed for hard drives and later adapted to SSDs when they were first introduced. While this worked well for early-model SSDs, as SSDs continued to advance, the limitations of AHCI became more and more apparent. For example, AHCI has only a single command queue and is limited in its overall queue depth. This restricts the SSD’s ability to handle multiple commands simultaneously and, as a consequence, can increase latency during SSD operations. Another issue is the SATA interface itself. The current version of the interface, SATA 3, has a bandwidth cap of 6Gb/s. Like the AHCI protocol, this speed cap was fine for traditional hard drives: their spinning platters made them physically incapable of maxing out the bandwidth provided to them. However, with the advent of SSDs and their lack of moving parts, it quickly became apparent that the interface was acting as a bottleneck and preventing maximum performance.

    PCIe Interface and SSD Performance: Benefits of NVMe SSDs

    This all changed with the introduction of NVMe. Unlike the SATA interface, NVMe can communicate directly with the system CPU via PCIe (Peripheral Component Interconnect Express). Overall, this reduces total latency and overhead. In turn, this results in much higher performance than SATA is capable of: NVMe can provide bandwidth in the thousands of MB/s, as opposed to the hundreds that SATA can support. Additionally, each new generation of the PCIe interface increases the total data transfer rate it is able to support.

    NVMe Command Queue Advantages

    Another advantage of NVMe is the increase in command queues and the number of commands that can be sent per queue. As previously mentioned, SATA is limited to a single command queue. NVMe, on the other hand, can support up to 64K queues and is able to send 64K commands per queue. Due to this massive increase in performance, NVMe has become the go-to storage solution for a variety of applications, ranging from data center usage, which requires high-capacity, sustained and continuous performance, to system-intensive projects that require ultra-fast speeds. NVMe is particularly well suited for AI and ML workloads. These demanding applications require high-speed, high-density storage. During their initial phases, most AI algorithms need to process massive datasets of both structured and unstructured data. NVMe provides the fast access speeds that reduce the total time required for this stage.

    Future of Storage: NVMe SSDs

    Why choose NVMe over SATA? By now, the answer should be obvious.
    With NVMe’s ability to provide high-speed, low-latency storage with reduced overhead, it’s clear which option should be chosen for projects that demand the fastest solution possible.
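
    The raw numbers make the gap concrete. Here is a back-of-the-envelope comparison using the published line rates and encoding overheads of each interface (SATA 3 uses 8b/10b encoding; PCIe Gen3 and Gen4 use 128b/130b):

        # Theoretical interface ceilings, before protocol overhead and drive limits.
        def pcie_lane_MBps(gtps: float) -> float:
            """Per-lane throughput in MB/s for PCIe Gen3+ (128b/130b encoding)."""
            return gtps * 1000 * (128 / 130) / 8

        sata3 = 6_000 * (8 / 10) / 8        # 6 Gb/s line rate -> 600 MB/s
        gen3_x4 = 4 * pcie_lane_MBps(8)     # ~3,938 MB/s
        gen4_x4 = 4 * pcie_lane_MBps(16)    # ~7,877 MB/s

        print(f"SATA 3:       {sata3:8,.0f} MB/s")
        print(f"PCIe Gen3 x4: {gen3_x4:8,.0f} MB/s ({gen3_x4 / sata3:.1f}x SATA)")
        print(f"PCIe Gen4 x4: {gen4_x4:8,.0f} MB/s ({gen4_x4 / sata3:.1f}x SATA)")

    Even a modest four-lane Gen3 link offers more than six times the ceiling of SATA 3, which is why a single NVMe SSD can deliver thousands of MB/s.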

  • Bootable RAID & Drive Support via LACS Binary Driver Solutions

    Unlike other solutions that require a manually installed binary driver for bootable applications, HighPoint’s LACS (Linux Auto Compilation Solution) was designed to streamline and automate the entire Linux setup and installation process. Provided the host platform has an internet connection, the administrator need only execute a single command line to activate the installation process. Once initialized, LACS will connect to the backend server, download all necessary files (the installation scripts and the device driver that matches the target distribution), and execute the required commands in the background. In addition, HighPoint’s open-source NVMe driver package has been incorporated directly into the LACS workflow, so administrators no longer have to install additional software manually after the OS is up and running. The open-source package allows LACS to verify and update the active device driver to incorporate any fixes or patches available for the host OS.

    How it Works

    LACS enables even the most novice Linux administrator to seamlessly integrate HighPoint NVMe RAID solutions into mainstream Linux distributions. LACS was designed to ensure that storage hosted by a HighPoint product or solution remains fully operational whenever a new kernel is installed or the distribution is updated or patched. Installation could not be simpler: administrators need only execute a single command line; everything else is handled by LACS. The system automatically checks our secure, dedicated online database for updates whenever the Linux platform is booted, and will automatically recompile driver support as needed to ensure NVMe storage media remains readily accessible. The system has been continually refined over the years to further streamline and automate the update process while incorporating new product lines and storage technology.

    Robust Self-Monitoring, Update & Troubleshooting System

    If LACS determines that the host Linux OS is not compatible with the active driver and available updates, the service will immediately instruct the LACS network to request a new binary driver. This process enables HighPoint’s dedicated LACS engineering team to expedite the development process and ensure the RAID AIC stays in sync with the customer’s computing environment. Any errors encountered during installation, monitoring or update processes are immediately logged by LACS, and can be easily retrieved for examination by our Support Department. This automated process was designed to streamline troubleshooting and information gathering when submitting a support inquiry by reducing the back-and-forth between the customer and service provider, and ensures all necessary data is on hand for immediate analysis.

    Supported Distributions
    · CentOS
    · Debian
    · Fedora
    · RHEL
    · Rocky Linux
    · Ubuntu
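
    LACS itself is proprietary, but the core problem it automates is easy to illustrate: an out-of-tree kernel module must match the running kernel, so every kernel update requires a compatibility check and, when needed, a fresh driver build. Below is a simplified, hypothetical sketch of that check; the record path and matching rule are illustrative only, not LACS’s actual logic.

        import platform
        from pathlib import Path

        # Hypothetical location where the driver records the kernel it was built for.
        BUILD_RECORD = Path("/var/lib/example-nvme-driver/built_for_kernel")

        def driver_matches_kernel() -> bool:
            running = platform.release()  # e.g. "5.15.0-91-generic"
            return BUILD_RECORD.exists() and BUILD_RECORD.read_text().strip() == running

        if driver_matches_kernel():
            print("Driver matches the running kernel -- nothing to do")
        else:
            # An auto-compilation service would now fetch matching sources,
            # rebuild the module, and reload it before storage is needed.
            print("Kernel changed -- driver must be recompiled")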

  • HighPoint SafeStorage Solution Adheres with TCG/OPAL SED Technology

    NVMe storage and connectivity solutions are frequently deployed to satisfy the stringent performance and reliability requirements of industrial, media and AI applications designed to process large volumes of sensitive data. Securing this data from prying eyes, while protecting the privacy of end users and corporate customers alike, is of critical importance. As such, disk encryption technology is quickly becoming an essential component of storage solutions designed to address these workflows.

    HighPoint’s SafeStorage solution was developed to work in conjunction with the state-of-the-art SED technology that has been widely adopted by mainstream NVMe devices and is based on the TCG OPAL SSC specifications. It is designed to protect data assets when physical drives are misplaced or stolen by preventing unauthorized access to stored data. First introduced with our PCIe Gen4 SSD7580C 8-Channel U.2/U.3 NVMe RAID HBA, SafeStorage can be applied to both single-disk and RAID configurations at the disk level, and administered via our universal management and monitoring suites. And unlike software-based services, which rely on CPU resources, SafeStorage initiates encryption at the drive level to minimize the performance impact on the host platform.

    Unified & Streamlined RAID & Storage Encryption Solution

    HighPoint SafeStorage is a unified NVMe storage encryption solution developed to accommodate both large-scale RAID arrays and individually configured SSDs, and can be scaled across multiple HighPoint PCIe AICs connected to the host platform. RAID volumes are encrypted at the time of creation and will automatically activate each disk member’s self-encryption capabilities. SafeStorage’s SED features are enabled at the hardware level, and require no unique driver or standalone software application; everything is managed directly by HighPoint’s universal RAID Management and Monitoring suite. The interface will automatically recognize SafeStorage-compatible controllers and provide a new toolset known collectively as Disk & Enclosure Security. The toolset handles all SED-related features and settings, including setting up disk encryption, managing encryption keys and managing security policies. This streamlined, lightweight approach to SED technology reduces complexity and minimizes the risk of software conflicts.

    Securely Lock Down Crucial Data from Unauthorized Access

    When Disk Security is enabled, your data is automatically locked down whenever the disk media is removed from the HighPoint storage or connectivity device. The SED technology will assign unique identifiers, known as “Keys” and taking the form of passwords, to both the HighPoint device (PCIe AIC) and each hosted SSD. Keys are automatically generated when the Disk Security feature is activated, and can be configured/modified by the administrator as required. This system ensures your data cannot be accessed unless the keys match. Keys/passwords are securely stored by the NVMe device and can be managed using HighPoint’s WebGUI and CLI management suites. Unless an administrator changes a key, disks/arrays can be accessed normally. However, lockdown mode is engaged as soon as a disk is removed; such disks cannot simply be moved to a separate HighPoint or non-HighPoint adapter or enclosure for access. A “thief” would need to link the disk/array to the new HighPoint device, and would need to enter the original keys in order to do so.
    Cryptographic Erasure

    Changing or deleting the encryption keys of SED-capable disks will render all encrypted data indecipherable and thus unrecoverable. SafeStorage allows administrators to delete and regenerate keys (aka passwords) as needed to ensure your encrypted data is always under lock and key. A few simple commands enable authorized administrators to immediately prep storage for resale, retirement or reuse. The Cryptographic Erase command replaces the encryption key inside each drive; this makes it impossible to ever decrypt the data stored on these devices. When executed, data is rendered inaccessible and considered cryptographically erased. The drives can then be reset to an unowned state, and reused once a new encryption key is generated. In addition, upon disabling the Disk Security feature, SafeStorage will automatically initiate the Cryptographic Erase command. The process is automated and takes only seconds to complete. Disk Security can be easily disabled at any time using HighPoint’s WebGUI and CLI utilities.

    Summary

    SafeStorage’s innovative combination of TCG/OPAL-compliant technology, scalable hardware-level encryption and a lightweight, centralized management interface enables administrators to streamline the encryption process without degrading system performance or complicating workloads.

    Learn More
    Rocket 7628A – PCIe Gen5 x16 to 4-MCIOx8 NVMe RAID Adapter
    Rocket 7608A – PCIe Gen5 x16 to 8-M.2x4 NVMe RAID AIC
    Rocket 7528D – PCIe Gen4 x16 to 4-SlimSASx8 NVMe RAID Adapter
    SSD7749E – PCIe Gen4 x16 to 8-E1.S NVMe RAID AIC
    SSD7749M – PCIe Gen4 x16 to 8-M.2 NVMe RAID AIC
    SSD7580C – 8-Channel U.2/U.3 NVMe RAID HBA
    HighPoint’s RAID Management and Monitoring
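
    The principle behind the cryptographic erasure described above is easy to demonstrate in software. In the toy sketch below (which uses the third-party Python cryptography package purely for illustration; SED drives do this in hardware, with a media encryption key that never leaves the controller), destroying the key renders the ciphertext permanently indecipherable:

        from cryptography.fernet import Fernet, InvalidToken

        key = Fernet.generate_key()   # stands in for the drive's media encryption key
        ciphertext = Fernet(key).encrypt(b"sensitive customer records")

        # Normal operation: the key is present, so data decrypts transparently.
        assert Fernet(key).decrypt(ciphertext) == b"sensitive customer records"

        # Cryptographic erase: replace the key. The old ciphertext is still
        # physically present, but no key exists that can ever decrypt it.
        key = Fernet.generate_key()
        try:
            Fernet(key).decrypt(ciphertext)
        except InvalidToken:
            print("Data is cryptographically erased -- unrecoverable without the old key")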

  • Beyond Traditional Storage: Advantages of PCIe NVMe AIC Drives for Modern Workloads

    At first glance, the compact single-AIC form factor and blazing performance may seem the most obvious advantages. However, customers should not overlook the inherent strengths of a PCIe-based storage solution. Some key factors to consider are outlined below: direct-to-CPU architecture, superior queue depth & parallelism, low latency, and an ultra-compact form factor. The following article discusses several of the key advantages provided by PCIe-based storage solutions.

    Direct to CPU Hardware Architecture

    Unlike SAS/SATA-based storage, NVMe drives are designed to interface directly with the system’s CPU and GPU via the PCIe host bus, essentially bypassing the traditional storage architecture that may be impeded by layers of controllers and adapters. While SAS/SATA storage relies on dedicated I/O processors to enhance performance, NVMe media was designed to interface directly with the host system's powerful AMD or Intel based CPU via PCIe connectivity. Though effective, the I/O processors associated with SAS/SATA solutions are only capable of delivering a small fraction of the processing power provided by a host CPU, and are simply unable to keep pace with modern NVMe media. HighPoint’s PCIe expansion storage drives build on the inherent strengths of NVMe media and deliver uncompromised transfer performance.

    Pushing Storage Boundaries

    RocketAIC series drives leverage today’s fastest and most reliable NVMe media to deliver unbeatable storage density and performance. Each drive directly hosts up to 8 NVMe SSDs, and is available with up to 60.44TB of storage capacity and speeds up to 28GB/s; all from a single, compact AIC device! HighPoint RocketAIC NVMe expansion drives incorporate Broadcom’s industry-leading PCIe switch chipsets to reduce latency, optimize signal integrity, and maximize transfer throughput. This unique approach ensures the x16 lanes of available upstream PCIe bandwidth are never wasted; x4 lanes of bandwidth are available to each hosted NVMe SSD, at all times.

    Superior Queue Depth / Parallelism: Executes a Massive Number of Concurrent Tasks

    NVMe storage media can execute a huge number of concurrent tasks. The queue depth (the number of I/O requests a storage device can handle at one time) of NVMe media is measured in the tens of thousands, compared to tens or hundreds for a SAS/SATA device. The difference is staggering: 64K commands with a depth of 64K vs. 32 commands and a depth of 256. NVMe media, even a single SSD in place of the system disk, enables workstations to efficiently process an immense number of tasks simultaneously, without overly stressing system resources. Specialized NVMe storage, such as a HighPoint RocketAIC drive, can be added to boost the performance and response time of critical applications, and further streamline the capabilities of the workstation.

    More than Just Raw Power

    NVMe media’s direct-to-CPU architecture significantly lowers latency, which enables the entire platform to process I/O requests in a much more efficient manner. Lowering latency improves response times, enables applications to load faster, and streamlines file transfer. Unsurprisingly, low-latency storage solutions are a boon for performance-hungry applications such as 3D design and rendering, media post-production, AI/ML learning, design & engineering, and scientific simulations.
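
    A simple way to observe this parallelism on your own hardware is to issue many small random reads concurrently and watch aggregate throughput scale with the number of in-flight requests. The sketch below is a rough illustration; the file path is a placeholder, and for meaningful numbers you would bypass the OS page cache (e.g. with O_DIRECT).

        import os
        import random
        import time
        from concurrent.futures import ThreadPoolExecutor

        PATH = "/path/to/large_test_file"   # placeholder: a multi-GB file on the drive
        BLOCK = 4096
        READS = 4096

        def random_read(fd: int, size: int) -> int:
            offset = random.randrange(0, size - BLOCK) & ~(BLOCK - 1)  # 4K-aligned
            return len(os.pread(fd, BLOCK, offset))

        def bench(workers: int) -> float:
            """Return aggregate MB/s for READS random reads across N workers."""
            fd = os.open(PATH, os.O_RDONLY)
            size = os.fstat(fd).st_size
            start = time.perf_counter()
            with ThreadPoolExecutor(max_workers=workers) as pool:
                list(pool.map(lambda _: random_read(fd, size), range(READS)))
            elapsed = time.perf_counter() - start
            os.close(fd)
            return READS * BLOCK / elapsed / 1e6

        for depth in (1, 4, 16, 64):
            print(f"{depth:>3} concurrent readers: {bench(depth):8.1f} MB/s")

    On NVMe media, throughput typically keeps climbing as the worker count rises; a SAS/SATA device saturates far earlier.
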
    HighPoint’s proven RAID and storage technology enables our SSD series NVMe RAID AICs and RocketAIC drives to further optimize performance by increasing queue depth for concurrent I/O requests, which is ideal for data-intensive applications with massive workloads.

    Ultra-Compact Form-Factor

    NVMe storage is amazingly compact, and HighPoint NVMe AIC solutions bring this to an entirely new level. A single HighPoint SSD series NVMe AIC or RocketAIC drive can directly host over 60TB of storage. That’s 60+TB from a single PCIe card! E1.S and M.2 media is hosted directly by the SSD7749x series AIC – you don’t need to concern yourself with drive bays, storage racks and the related power/data cabling accessories. The cards can be easily installed into ordinary desktop workstations, and require no more resources than a modern GPU.

    Learn More
    SSD7749E – 8x E1.S PCIe 4.0 x16 NVMe RAID AIC
    SSD7749M – 8x M.2 PCIe 4.0 x16 NVMe RAID AIC
    RocketAIC PCIe NVMe Expansion Drives for PC Platforms
    Bootable NVMe AIC Drives
    RocketAIC Drive Matrix for Dell & HP Platforms
    Breaking Storage Barriers with NVMe Technology: Explore HighPoint’s Single-Slot 60TB NVMe Solutions

  • Breaking Down the Tech: How HighPoint PCIe NVMe AIC Storage Drives Boost Mac Pro Performance

    HighPoint RocketAIC PCIe expansion storage drives eliminate data transfer bottlenecks and streamline critical workflows. NVMe storage has many unique characteristics that are well suited to a professional workstation platform, such as Apple’s 2023 and 2019 Mac Pros. The compact form factor and blazing performance seem the most obvious advantages, but customers should not overlook the inherent strengths of a PCIe-based storage solution. Some of the key factors to consider are outlined below: low latency, superior queue depth, and the direct-to-CPU hardware architecture.

    Ultra Low-Latency

    NVMe’s advantage over conventional media is more than just brute power. NVMe’s direct-to-CPU architecture significantly lowers latency, which enables the entire platform to process I/O requests in a much more efficient manner. Lowering latency improves response times, enables applications to load faster, and streamlines file transfer. This is of critical importance for media applications, for which the Mac Pro is ideal. Excessive latency can introduce the risk of error into media streams and interrupt playback, which can slow and complicate the editing process.

    Superior Queue Depth / Parallelism

    NVMe storage media can execute a huge number of concurrent tasks. The queue depth (the number of I/O requests a storage device can handle at one time) of NVMe media is measured in the tens of thousands, compared to tens or hundreds for a SAS/SATA device. The difference is staggering: 64K commands with a depth of 64K vs. 32 commands and a depth of 256. NVMe media, even a single SSD in place of the system disk, enables a Mac Pro to efficiently process an immense number of tasks simultaneously, without ever really tapping into the machine’s potential. Specialized NVMe storage, such as a HighPoint RocketAIC drive, can be added to boost the performance and response time of critical applications, and further streamline the capabilities of the workstation.

    Direct to CPU Hardware Architecture

    Unlike conventional storage media, NVMe drives are designed to interface directly with the system’s CPU and GPU via the PCIe host bus, essentially bypassing the conventional storage architecture that relies on layers of storage controllers and adapters.

    Key Differences
    Protocol: NVMe is far more efficient than SAS/SATA, as it was designed specifically for SSD media.
    Connection Interface: SAS/SATA requires multiple controllers and/or adapters, while NVMe interfaces directly with the PCIe bus.
    Latency: In contrast to SAS/SATA storage, NVMe media’s I/O path is short and direct, which significantly reduces latency.
    Parallelism: NVMe handles a huge number of parallel I/O operations, and can better utilize multi-core CPU environments.

    HighPoint RocketAIC NVMe expansion drives take this a step further by incorporating Broadcom’s industry-leading PCIe switch chipsets to minimize latency, maximize transfer speeds and optimize signal integrity. The technology is integrated directly into the AIC’s board architecture. This unique approach ensures available PCIe bandwidth is never wasted; x4 lanes of bandwidth are available to each hosted NVMe SSD, at all times.

    Learn More
    RocketAIC for Mac Pro Workstations
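
    Latency of the kind discussed above is easy to measure directly. The sketch below times individual 4KiB random reads and reports the average and 99th percentile; the file path is a placeholder, and on macOS you would additionally disable caching on the file descriptor (F_NOCACHE) so that reads hit the drive rather than the page cache.

        import os
        import random
        import statistics
        import time

        PATH = "/path/to/large_test_file"   # placeholder: a large file on the NVMe volume
        BLOCK = 4096
        SAMPLES = 1000

        fd = os.open(PATH, os.O_RDONLY)
        size = os.fstat(fd).st_size

        latencies_us = []
        for _ in range(SAMPLES):
            offset = random.randrange(0, size - BLOCK) & ~(BLOCK - 1)  # 4K-aligned
            start = time.perf_counter()
            os.pread(fd, BLOCK, offset)   # one small random read
            latencies_us.append((time.perf_counter() - start) * 1e6)
        os.close(fd)

        latencies_us.sort()
        print(f"avg 4K read latency: {statistics.mean(latencies_us):7.1f} us, "
              f"p99: {latencies_us[int(SAMPLES * 0.99)]:7.1f} us")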

  • SSD6200 Series AICs: Revolutionizing Virtualization with Native Driver Support and Hardware RAID

    Virtualization solutions, such as HCI (hyperconverged infrastructure) or VDI (virtual desktop infrastructure) servers, utilize unified software applications to replace traditional server hardware. Traditional platforms are extremely costly to set up and maintain: large-scale, multi-rack server installations require considerable real estate to house, dedicated IT staff and substantial power draw, and can raise environmental concerns (heat exhaust, or the water resources needed for evaporative cooling hardware). As a result, businesses and organizations large and small are increasingly adopting HCI- and VDI-based solutions.

    HighPoint SSD6200 series NVMe RAID AICs are ideal for such applications. The products were designed to host multiple, bootable virtual drives for both server and client-side services, and are natively supported by leading HCI and VDI suites, such as VMware vSAN and ESXi, and Microsoft’s Azure & Hyper-V. The PCIe x8 host interface is universally compatible with any PCIe Gen3, 4 or 5 platform, and can deliver up to 7,000MB/s of real-world performance from just a pair of off-the-shelf M.2 SSDs. The AICs are equipped with an impressive array of hardware and software features designed to maximize performance, reliability and serviceability.

    SSD6200 series AICs provide a feature set that is essential for virtualization solutions:

    Native Driver Support – as embedded devices, SSD6200 series AICs will be automatically recognized by all major HCI, virtualization and operating system platforms; this includes VMware ESXi, Windows/Windows Server/Hyper-V, any flavor of Linux running kernel v3.10 or later, and FreeBSD/FreeNAS. This equates to plug-and-play installation with streamlined OS updates and patching. Unlike NVMe solutions that require binary drivers, SSD6200 series AICs require no additional downtime, and any hosted SSD/array will remain online and accessible. SSD6200 series NVMe AICs utilize the Marvell NR2241 controller IC, which is natively supported by VMware platforms.

    Hardware RAID – SSD6200 series NVMe AICs support RAID 1, RAID 0 and JBOD at the hardware level. In fact, the products allow you to create arrays using simple switches integrated directly into the AIC; you don’t even need an operating system to get everything up and running.

    Integrated Boot Security: Mirroring (RAID 1) Protection – mirroring a bootable SSD essentially creates an automated backup. If the primary disk should fail, the SSD6200 AIC ensures the backup is seamlessly transitioned into its place, so the host system remains online and continues to operate. The redundancy delivered by a mirrored configuration is essential for virtual machine and hosting solutions, which must remain available for client access on a continual basis.

    Superior Performance – as NVMe storage solutions, SSD6200 series AICs deliver a level of performance and responsiveness far superior to that of a SAS/SATA SSD. The minimized latency, massive queue depth and dedicated PCIe bandwidth for each SSD work to minimize boot times, and can greatly enhance the overall performance of any system disk they are hosting. A sustained transfer speed of over 7,000MB/s, combined with random IOPS measured in the hundreds of thousands to millions, is particularly well suited to an HCI or VDI workflow, which must cater to clients with a wide range of application and use requirements.

    Ultra-Compact Form Factor – easy to install in compact tower servers and 1U/2U rackmounts.
    The SSD6202 models, in particular, are available in a Half-Height/Half-Length form factor, and directly host the NVMe media. No drive bays or supplemental cooling, power or cabling hardware are required.

    Integrated LEDs, Audible Alarm and OOB Port – these features are ideal for field-service workflows, and enable even inexperienced administrators to keep tabs on hosted SSDs and arrays with a simple glance. The color-coded LEDs instantly convey the status of the media (Red – fail or alert / Green – normal/optimal). The OOB (out-of-band) port provides a secure connection to the controller for troubleshooting, diagnostics and servicing outside of an OS.

    Learn More
    SSD6200 Series NVMe Hardware RAID AICs
    RocketAIC 6202 Series PCIe NVMe Expansion Drives
    RocketAIC 6204 Series PCIe NVMe Expansion Drives
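
    To make the mirroring guarantee described above concrete, here is a toy sketch of RAID 1 semantics: every write lands on both members, reads can be served by either, and if one member fails the other continues serving data without interruption. This is purely conceptual; the SSD6200 implements mirroring in hardware, below the OS.

        class Raid1Mirror:
            """Toy RAID 1: duplicate every write, survive a single member failure."""

            def __init__(self):
                self.members = [dict(), dict()]   # two disks as block -> data maps
                self.failed = [False, False]

            def write(self, block: int, data: bytes) -> None:
                for i, disk in enumerate(self.members):
                    if not self.failed[i]:
                        disk[block] = data        # identical copy on each healthy member

            def read(self, block: int) -> bytes:
                for i, disk in enumerate(self.members):
                    if not self.failed[i]:        # any healthy member can serve reads
                        return disk[block]
                raise IOError("array failed: no healthy members remain")

        mirror = Raid1Mirror()
        mirror.write(0, b"boot sector")
        mirror.failed[0] = True                   # the primary disk dies...
        assert mirror.read(0) == b"boot sector"   # ...yet the system stays online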

  • Exploring the Powerhouse: A Deep Dive into PCIe Lane Values of PCIe M.2 NVMe Cards

    Anyone remotely familiar with PCIe technology will recognize the terms “x1”, “x4”, “x8” and “x16”. They are typically part of a PCIe device’s name or description. The “x” value represents the device’s lane count. In many cases, this number represents the PCIe card’s performance capability (electrical lanes, or bandwidth). However, it is best not to take the number at face value. In some cases, “x#” can reflect the card’s physical size or PCIe slot requirement (known as PCIe length, or mechanical lanes), and this value may or may not correspond with its actual performance capability. Determining a PCIe device’s true electrical and mechanical lane rating is of critical importance when evaluating a high-performance PCIe card, especially an NVMe device, as it (in part) determines how well the SSDs will be able to perform. “x16” is paramount, but you will need to make sure this number isn’t just about the card’s physical requirement – and, provided x16 lanes of bandwidth are available, that the card in question can make the most of them. This article attempts to shine a spotlight on the terminology associated with PCIe lanes, and examine the differences between electrical and mechanical lanes.

    Deciphering PCIe Terminology: Examining the Difference Between Electrical and Mechanical Lanes

    How do you determine a card’s true PCIe bandwidth capability and lane speed? As mentioned previously, “x#” of lanes doesn’t always translate directly into the card’s throughput. Below, we examine “x” lane terminology – electrical lanes (the actual PCIe host bandwidth) and mechanical lanes (the physical size requirement of the AIC) – and how lane count influences storage performance.

    What is meant by Mechanical and Electrical PCIe lanes?

    First, let’s examine the following product title: “4x M.2 NVMe SSD to PCIe 3.0 x8 / x16 Adapter Card”. The first part, “4x M.2 NVMe SSD”, suggests the card supports up to four M.2 NVMe SSDs – pretty self-explanatory. The second part of the description, “PCIe 3.0 x8 / x16”, is a bit more complicated – this refers to the card’s PCIe lanes, and can have two meanings:

    1) Mechanical lane requirement: the type of PCIe slot required by the card (the physical connection to the computer’s motherboard). “x8 / x16” suggests the card can be physically installed into a PCIe slot with x8 or x16 mechanical lanes. This means the card has a mechanical lane rating of x8. How did we determine this? The general rule of thumb for PCIe connectors is that they are upward compatible; that is, a PCIe card can be physically installed into any PCIe slot with the same “x#” rating, or higher. For example:

    · a card with x4 mechanical lanes can be physically installed into an x4, x8 or x16 slot
    · a card rated for x8 mechanical lanes can be installed into an x8 or x16 slot
    · however, a card with a mechanical lane rating of x16 can only be installed into a PCIe slot with x16 mechanical lanes.

    There are exceptions, of course. Some computers/devices have PCIe slots that are classified as “open-end”, “slotted” or “notched” – this means there is a physical indentation in the slot that enables cards with a higher x# mechanical lane rating to be installed. (Pictured: an “open-ended” PCIe x4 slot; the cut on the right-hand side of the slot enables it to accept larger PCIe cards.)

    2) Electrical lanes (PCIe bandwidth): the card’s performance level. This corresponds with the PCIe card’s upstream & downstream bandwidth capabilities.
    In this particular example, we can deduce that the card is rated at x8 electrically. Why? Recall our rule for the mechanical lane requirement: x8 is the maximum the card can possibly deliver. In order to provide x16 lanes of bandwidth, the card would have to be equipped with a mechanical x16 connector – and a mechanical x16 connector is simply too large to insert into a standard slot rated at x8 mechanically. As a general rule, a PCIe device can only provide a number of electrical lanes equal to or less than its “x#” mechanical lane rating. For example, a PCIe card rated at x16 mechanically could potentially provide x16, x8, x4 or even x1 lanes of electrical bandwidth. However, a card with an x4 mechanical rating could only provide x4 or x1 lanes of electrical bandwidth.

    How do PCIe lanes influence NVMe storage performance?

    “PCIe lanes”, or PCIe bandwidth, really refers to the AIC’s electrical lanes (the actual PCIe lane speed). As a general rule, the higher this value (from x1 to x16) the better – “more lanes” essentially means “more performance”. However, the reasons go far beyond an increasing number.

    Transfer Speed (Throughput): the most obvious advantage of more lanes is a larger performance threshold. A PCIe device rated at x16 provides 16 times the transfer bandwidth of one with an x1 rating. Naturally, x16 is optimal; it provides 16GB/s of bandwidth for PCIe Gen3, and 32GB/s for Gen4. This translates into real-world throughput of 14,000MB/s and 28,000MB/s, respectively. This is obviously useful for large SSD configurations, as a single Gen3 SSD can deliver approximately 3,500MB/s, while a Gen4 SSD doubles this to 7,000MB/s. The x16 threshold enables up to four NVMe SSDs to operate at full speed, concurrently.

    Concurrent I/O (Parallelism): the higher the threshold, the more simultaneous I/O is possible. NVMe SSDs can execute a huge number of concurrent tasks, as their queue depth (the number of I/O requests a storage device can handle at one time) is far superior to that of a SAS/SATA SSD (tens of thousands, compared to tens or hundreds). The advantage NVMe has over conventional storage is eye-opening: 64K commands with a depth of 64K vs. 32 commands and a depth of 256. Unsurprisingly, a larger performance threshold (PCIe lane bandwidth) streamlines this process, as a larger number of simultaneous data streams can be sustained.

    Minimizes Latency: more bandwidth lowers latency. In other words, a wider lane speeds up the flow of traffic, and helps eliminate the risk of a bottleneck. Minimizing latency improves performance on multiple levels: it shortens response times, loads software applications faster, and streamlines file transfers (whether read or write).

    Avoid or Eliminate Performance Bottlenecks: the more lanes that are available to the NVMe storage media, the faster and more efficiently it can operate. NVMe media performs at its best when each SSD has access to x4 lanes. As such, reducing the lane count or PCIe generation can seriously degrade performance. An NVMe AIC with insufficient bandwidth will be unable to allocate x4 lanes to each SSD; the SSDs will be forced to operate at lower speeds (x2 or even x1).

    Scalability: the benefits of a higher lane count aren’t exclusive to the AIC in question. A motherboard or computing platform with a larger number of PCIe lanes (both electrical and mechanical) will be able to support faster and/or more NVMe AICs, and enable each hosted SSD to perform at optimal speeds (the sketch below puts numbers to this).
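
    The scalability point reduces to simple arithmetic: divide the available electrical lanes (and their per-generation bandwidth) by the x4 lanes each SSD wants. A quick sketch of that budget, using approximate per-lane figures (real-world throughput is somewhat lower):

        # Approximate per-lane PCIe throughput (128b/130b encoding), in MB/s.
        LANE_MBPS = {3: 985, 4: 1969, 5: 3938}
        LANES_PER_SSD = 4     # NVMe SSDs perform best with a full x4 link

        def lane_budget(gen: int, electrical_lanes: int, ssds: int) -> None:
            upstream = electrical_lanes * LANE_MBPS[gen]
            full_speed = electrical_lanes // LANES_PER_SSD
            print(f"Gen{gen} x{electrical_lanes}: {upstream:,} MB/s upstream; "
                  f"{full_speed} SSDs at full x4 speed; with {ssds} hosted SSDs, "
                  f"~{upstream / ssds:,.0f} MB/s each under simultaneous full load")

        lane_budget(gen=4, electrical_lanes=16, ssds=8)   # e.g. an 8-SSD Gen4 x16 AIC

    This is also why switch-based designs matter: the switch lets eight SSDs each keep a full x4 downstream link while sharing the x16 upstream connection.
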
    Platforms with healthy lane counts give an administrator more flexibility to expand or upgrade NVMe storage to keep pace with critical applications.

    Conclusion: x16 is ideal, but make sure that bandwidth isn’t going to waste

    By now it should be clear how electrical and mechanical lanes are related, but different. An electrical lane rating of x16 is what you want to shoot for when evaluating PCIe NVMe AICs; it provides the maximum transfer bandwidth possible for a single PCIe slot, and will help maximize the performance potential of any NVMe configuration. And of course, x16 bandwidth requires a card built for an x16 mechanical slot. However, it’s important to remember that the raw numbers don’t tell the whole story. You will want an NVMe AIC that can allocate the maximum number of lanes to each hosted SSD. HighPoint NVMe AICs and PCIe AIC Drives do exactly that – x4 lanes per SSD to ensure optimal performance. Want to know how this is done?

    Learn More
    Exploring the Powerhouse: A Deep Dive into PCIe Switch Chipsets for PCIe Gen3-M.2 NVMe Cards
    HighPoint Gen3 NVMe AICs
    HighPoint RocketAIC Gen3 PCIe Expansion Drives for Mac Pro
    HighPoint RocketAIC Gen3 PCIe Expansion Drives for Dell & HP Systems
