MD Raid General Discussion

A microconference proposal for general MD RAID discussion.

Why people should be here

In the last few years there have been many development activities in MD RAID, for example:

  • RAID5 cache
    • This is an effort to close the RAID5 write hole (data loss when a system crash happens in the middle of a write). The original idea came from the Ext4 file system journal, which is why its earlier name was raid5-journal. Today the RAID5 cache can be used both to improve performance for write bursts and to protect against the write hole (see the first sketch after this list).
  • Partial parity log
    • This is another effort to close the RAID5 write hole. In a degraded array there is no way to recalculate parity after a crash, because one of the disks is missing. The RAID5 partial parity log makes the parity recoverable in this case; it does not protect in-flight data, it only keeps the data on disk consistent (the first sketch after this list shows how both write-hole mechanisms are enabled).
  • RAID1 clustering
    • This is an in-kernel distributed data mirroring implementation: the mirror devices of a RAID1 array can live on different servers, even in different data centers far from each other. Users can build distributed, clustered data replication on MD RAID1 (a sketch follows this list).
  • Sysfs interface for mdadm
    • Currently the mdadm tool mixes the ioctl and sysfs interfaces when communicating with the kernel, so the kernel code has to serve both interfaces through different code paths. The mdadm developers now want to unify this into a single set of sysfs files, which requires effort in both kernel space and user space. The benefit is a unified code path on both sides, which makes the communication between mdadm and the MD kernel code much clearer (see the sysfs example after this list).
  • Performance improvement for NVMe devices
    • Lockless I/O submission on RAID1 and DISCARD improvements on RAID0, which improve performance on fast NVMe SSDs.
  • Device-mapper (dm-raid target) interface
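
For illustration, here is a minimal sketch of how the two write-hole approaches above are enabled from mdadm. The device names are placeholders, and exact option support depends on the mdadm and kernel versions in use:

  # Hypothetical devices; adjust the names for a real system.
  # RAID5 cache: attach a dedicated (ideally fast) journal device.
  mdadm --create /dev/md0 --level=5 --raid-devices=3 \
        --write-journal /dev/nvme0n1p1 /dev/sdb /dev/sdc /dev/sdd

  # Partial parity log: no extra device, the PPL is kept on the member disks.
  mdadm --create /dev/md1 --level=5 --raid-devices=3 \
        --consistency-policy=ppl /dev/sde /dev/sdf /dev/sdg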
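
A similar sketch for clustered RAID1; it assumes a working corosync/dlm cluster with shared access to the member devices, and again the device names are placeholders:

  # On the first node: create the array with a clustered write-intent bitmap.
  mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        --bitmap=clustered /dev/sdb /dev/sdc

  # On every other cluster node: assemble the same array.
  mdadm --assemble /dev/md0 /dev/sdb /dev/sdc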
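
For the sysfs interface work, the kernel already exposes MD state under /sys/block/mdX/md/; a few examples of the attributes a unified interface would build on:

  # Query array state and RAID level.
  cat /sys/block/md0/md/array_state
  cat /sys/block/md0/md/level

  # Kick off a consistency check via sysfs rather than ioctl.
  echo check > /sys/block/md0/md/sync_action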

We need to sit together to discuss the development road map, collaboration between the kernel and user space tools, and how to work with the development of other subsystems.

The session is also open to other developers; all constructive comments are warmly welcome.

Key Attendees (tentative)

  • Neil Brown (SUSE)
  • Shaohua Li (Facebook) - confirmed
  • Jes Sorensen (Facebook) - confirmed
  • Hannes Reinecke (SUSE)
  • Coly Li (SUSE)
  • Guoqing Jiang (SUSE)
  • Heinz Mauelshagen (Red Hat) - confirmed

Potential Attendees (tentative)

  • Song Liu (Facebook)
  • Pawel Baldysiak (Intel)
  • Artur Paszkiewicz (Intel)
  • Lijun Pan (DellEMC)

Key Topics for Discussion (tentative)

Here are the discussion topics we have in mind; feel free to add more:

  • Road map of Partial Parity Log development
  • Road map of clustered raid development
  • Sysfs interface improvement for mdadm and MD Raid
  • Mdadm test suite
  • Sync-up between RAID1 and RAID10: some features are only available in RAID1, such as the new I/O barrier and write-behind (maybe more).

Note: The final schedule will be posted on the Schedule page of the linuxplumbersconf.org website. We suggest preparing a slide file with a 1-2 page outline.

Proposed Schedule (tentative)

  • Roadmap sharing for:
    • raid5 partial parity log
    • md clustering
    • sysfs interface improvement
    • mdadm test suite
  • Problem discussion
    • Kernel space and user space collaboration for sysfs interface improvement
    • raid1 & raid10 sync up

Notes from the sessions will be recorded as text and shared by email or on a website. We will pick a note taker before each session starts; volunteers are warmly welcome.

Contact

Runner: Coly Li colyli@suse.de

 