Comparison between FIO, LSI and VD
Uploaded on 2014/3/5 by louis liu
March 5, 2014 Architect, hardware, architecture
The next generation of IBM’s X-series servers will be able to accommodate solid-state Flash drives clipped into their DIMM memory slots, potentially improving the response times of fast-paced enterprise applications.
On Thursday, IBM unveiled the Series 6 generation of its System X x86-based servers. In addition to the novel reuse of DIMM slots, the X6 architecture will also let customers upgrade them to a new generation of processors or memory without swapping in a new motherboard.
Diablo Technologies, a memory technology company, developed Memory Channel Storage (MCS), which enables flash on a DIMM module to be accessed by the CPU instead of going through the SATA bus as other DIMM form-factor SSD products have. Using a host-level driver and an ASIC on the DIMM, it creates a special memory storage layer in flash through which the CPU moves data from the RAM memory space. It also requires a minor modification to the server BIOS to be supported by the CPU, something three OEMs have completed so far. Each DIMM flash module has 16 separate, independently addressable data channels. This enables parallel data writes by the driver, improving performance over the DMA process used by PCIe-based solutions. Specs for ULLtraDIMM show 5-microsecond latencies for these devices, an order of magnitude better than typical PCIe flash products. The architecture also allows up to 63 ULLtraDIMM modules to be aggregated, creating 25 TB of flash capacity and more than 9M IOPS in a single server.
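Latency claims in that range can be sanity-checked on any block device with a single-outstanding-IO fio run; the sketch below is only an illustration, and the device path is a placeholder rather than anything named in the article.

$ fio --name=latency-4k --filename=/dev/sdX --readonly --direct=1 \
      --ioengine=libaio --rw=randread --bs=4k --iodepth=1 \
      --runtime=60 --time_based

With iodepth=1 the completion latency fio reports approximates the per-IO service time of the device rather than its queued throughput.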
Ref: IBM X series servers now pack Flash into speedy DIMM slots
IBM Beefs Up Enterprise X-Architecture With Flash, Modular Design
Heating Up Storage Performance
How to Make Flash Accessible on the Memory Bus
Memory Channel Storage™
ULLtraDIMM: combining SSD and DRAM for the enterprise
December 12, 2013 Architect, hardware
November 18, 2013 Architect, hardware, IDC, software, architecture
UPDATE 12.2: you can download the complete PDF of the YHD database design.
September 5, 2013 Architect, hardware, storage, architecture
FusionIO's ION is a shared-level ioMemory acceleration solution; this post gives a brief overview of its rough architecture. ION can be configured with InfiniBand or 40Gb Ethernet connectivity and supports the FCoIB, FCoE, EoIB and RDMA protocols.
Given the special relationship between FIO and HP, the figures below mainly come from ION Accelerator on HP DL380.
Note that ION places certain requirements on the base server: on a 1U machine the limited number of PCIe slots inevitably degrades IO performance.
ION essentially emulates a storage controller head: its own software, combined with InfiniBand, turns an ordinary server into "storage". The concept is similar to QGUARD, which is worth looking into if you are interested. For this kind of shared setup, the natural database choice is Oracle RAC; and with open-source alternatives around, I doubt anyone would spend a fortune on paid software plus hardware just to emulate storage :) Whether it can challenge Exadata (even though Exadata targets a different scenario) remains to be seen.
At the bottom layer a server emulates the storage array and still uses the FC protocol.
At present ION does not support a cluster architecture (multiple servers emulating a storage frame); it only offers a simple one-to-one HA architecture (similar to storage replication).
In a RAC architecture the topology is similar to the traditional design, but IO capability is greatly enhanced (an application cluster with super powers?). Comparable solutions include XtremSF and Flash Accel.
For details, see FIO ION.
August 26, 2013 Architect, hardware, linux, system, 11g new feature
With thanks to Surachart for his help.
Test: Flash Cache on 11gR2 + RHEL
Flash Cache (11gR2) is only supported on OEL or Solaris. If you want to use it on RHEL (for example, RHEL 5.3):
Apply patch 8974084 first.
SQL> startup
ORA-00439: feature not enabled: Server Flash Cache
ORA-01078: failure in processing system parameters

TEST: use the "strace" command to trace system calls & signals:

$ strace -o /tmp/file01.txt -f sqlplus '/ as sysdba' <<EOF
startup
EOF

Find 2 points:

1. About the /etc/*-release files:
3884 open("/etc/enterprise-release", O_RDONLY) = 8
3884 read(8, "Enterprise Linux Enterprise Linu"..., 255) = 64

2. About the "rpm" command:
32278 execve("/bin/rpm", ["/bin/rpm", "-qi", "--info", "enterprise-release"], [/* 25 vars */] <unfinished ...>

Next, it greps for "66ced3de1e5e0159" from the following output... try to check on Enterprise Linux:

$ rpm -qi --info "enterprise-release"
Name        : enterprise-release          Relocations: (not relocatable)
Version     : 5                           Vendor: Oracle USA
Release     : 0.0.17                      Build Date: Wed 21 Jan 2009 06:00:33 PM PST
Install Date: Mon 11 May 2009 11:19:45 AM PDT   Build Host: ca-build10.us.oracle.com
Group       : System Environment/Base     Source RPM: enterprise-release-5-0.0.17.src.rpm
Size        : 59030                       License: GPL
Signature   : DSA/SHA1, Wed 21 Jan 2009 06:56:48 PM PST, Key ID 66ced3de1e5e0159
Summary     : Enterprise Linux release file
Description : System release and information files

Fixed:

1. Fake the *-release files (don't forget to back them up first) - modify /etc/redhat-release and /etc/enterprise-release:

$ cat /etc/redhat-release
Enterprise Linux Enterprise Linux Server release 5.3 (Carthage)
$ cat /etc/enterprise-release
Enterprise Linux Enterprise Linux Server release 5.3 (Carthage)

2. Fake rpm so the check on the "enterprise-release" package passes - replace /bin/rpm with a wrapper:

# mv /bin/rpm /bin/rpm.bin
# vi /bin/rpm
#!/bin/sh
# Answer the enterprise-release query with the OEL signing key ID;
# pass everything else through to the real rpm binary.
if [ "$3" = "enterprise-release" ]
then
    echo 66ced3de1e5e0159
else
    exec /bin/rpm.bin "$@"
fi
# chmod 755 /bin/rpm

Try again -> start up the database:

SQL> startup
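Once the instance starts, the flash cache still has to be pointed at the flash device through the db_flash_cache_file and db_flash_cache_size initialization parameters; a minimal sketch, assuming the flash card is mounted at /flash (the path and size are illustrative, not from the original post):

$ sqlplus '/ as sysdba' <<EOF
alter system set db_flash_cache_file='/flash/flash_cache.dbf' scope=spfile;
alter system set db_flash_cache_size=32G scope=spfile;
shutdown immediate
startup
EOF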
June 4, 2013 Architect, hardware, 11g new feature
April 2, 2013 Architect, hardware, test
Testing the performance of three PCIe cards across all scenarios using fio
Reference: fio parameter settings
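A rough sketch of the kind of job line such a sweep is built from (the device name /dev/fioa and the numbers are illustrative assumptions, not the actual test configuration):

$ fio --name=randwrite-4k --filename=/dev/fioa --direct=1 --ioengine=libaio \
      --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 \
      --runtime=60 --time_based --group_reporting

Sweeping --rw (read, write, randread, randwrite, randrw), --bs and --iodepth then covers the usual scenario matrix for comparing the cards.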
September 17, 2012 Architect, hardware, storage
During a recent core-system migration, the NetApp storage behaved unexpectedly: even though the front-end load was not high, storage CPU utilization exceeded 55% and reads reached 1 GB/s.
Since we could not identify what was generating that 1 GB/s of reads, the project had to be rolled back, and more than 50 people wasted an entire night. A later check with NetApp revealed that the cause was a storage self-check: the NetApp disk scrub, which by default starts at 1:00 a.m. on Sunday and runs for six hours, happened to collide with our migration window. A summary follows:
At the time, the load on both controller heads A and B spiked to 60% and reads on each exceeded 1 GB/s, with head A more heavily loaded than head B. This is because the system used head B as its primary head, and during the self-check NetApp dynamically reduced the scrub load on head B, the head actively serving data.
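When this kind of unexplained read load shows up, it is worth checking for a running scrub before rolling anything back; on a 7-Mode filer something along these lines should reveal it (commands are an assumption about the environment in this incident):

sysstat -x 1
aggr scrub status -v

sysstat -x prints per-interval CPU and disk utilization, and aggr scrub status -v reports whether a RAID-level scrub is currently in progress.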
It’s a well-known fact in the storage world that firmware bugs (and sometimes hardware and data path problems) can cause silent data corruption; the data that ends up on disk is not the data that was sent down the pipe. To protect against this, when Data ONTAP writes data to disk, it creates a checksum for each 4kB block that is stored as part of the block’s metadata. When data is later read from disk, the checksum is recalculated and compared to the stored checksum. If they are different, the requested data is recreated from parity. In addition, the data from parity is rewritten to the original 4kB block, then read back to verify its accuracy.
To ensure the accuracy of archive data that may remain on disk for long periods without being read, NetApp offers the configurable RAID scrub feature. A scrub can be configured to run when the system is idle and reads every 4kB block on disk, triggering the checksum mechanism to identify and correct hidden corruption or media errors that may occur over time. This proactive diagnostic software promotes self-healing and general drive maintenance.
To NetApp, rule number 1 is to protect our customer data at all costs. Protection against firmware-induced silent data corruption is an example of NetApp’s continuing focus on developing innovative storage resiliency features to ensure the highest level of data integrity.
How you schedule automatic RAID-level scrubs
By default, Data ONTAP performs a weekly RAID-level scrub starting on Sunday at 1:00 a.m. for a duration of six hours. You can change the start time and duration of the weekly scrub, add more automatic scrubs, or disable the automatic scrub.
To schedule an automatic RAID-level scrub, you use the raid.scrub.schedule option.
To change the duration of automatic RAID-level scrubbing without changing the start time, you use the raid.scrub.duration option, specifying the number of minutes you want automatic RAID-level scrubs to run. If you set this option to -1, all automatic RAID-level scrubs run to completion.
Note: If you specify a duration using the raid.scrub.schedule option, that value overrides the value you specify with this option.
To enable or disable automatic RAID-level scrubbing, you use the raid.scrub.enable option.
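For example, the following commands would cap automatic scrubs at six hours (360 minutes) and disable automatic scrubbing entirely (values are illustrative):

options raid.scrub.duration 360
options raid.scrub.enable off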
Scheduling example
The following command schedules two weekly RAID scrubs. The first scrub is for 240 minutes (four hours) every Tuesday starting at 2 a.m. The second scrub is for eight hours every Saturday starting at 10 p.m.
options raid.scrub.schedule 240m@tue@2,8h@sat@22
Verification example
The following command displays your current RAID-level automatic scrub schedule. If you are using the default schedule, nothing is displayed.
options raid.scrub.schedule
Reverting to the default schedule example
The following command reverts your automatic RAID-level scrub schedule to the default (Sunday at 1:00 am, for six hours):
options raid.scrub.schedule " "