Low Latency Trading System Architecture




Trading Floor Architecture.


Executive Overview.


Increased competition, higher market data volumes, and new regulatory demands are some of the driving forces behind industry changes. Firms try to maintain their competitive edge by constantly changing their trading strategies and increasing the speed of trading.


A viable architecture has to include the latest technologies from both the network and application domains. It has to be modular to provide a manageable path to evolve each component with minimal disruption to the overall system. Therefore the architecture proposed by this paper is based on a services framework. We examine services such as ultra-low latency messaging, latency monitoring, multicast, computing, storage, data and application virtualization, trading resilience, trading mobility, and thin client.


The solution to the complex requirements of the next-generation trading platform must be built with a holistic mindset, crossing the boundaries of traditional silos like business and technology or applications and networking.


The main goal of this document is to provide guidelines for building an ultra-low latency trading platform while optimizing raw throughput and message rate for both market data and FIX trading orders.


To achieve this, we propose the following latency reduction technologies:


• High-speed inter-connect: InfiniBand or 10 Gbps connectivity for the trading cluster.


• High-speed messaging bus.


• Application acceleration via RDMA without application re-coding.


• Real-time latency monitoring and re-direction of trading traffic to the path with minimum latency.


Industry Trends and Challenges.


Next-generation trading architectures have to respond to increased demands for speed, volume, and efficiency. For example, the volume of options market data is expected to double after the introduction of options penny trading in 2007. There are also regulatory demands for best execution, which require handling price updates at rates that approach 1M msg/sec for exchanges. They also require visibility into the freshness of the data and proof that the client got the best possible execution.


In the short term, trading speed and innovation are the key differentiators. An increasing number of trades are handled by algorithmic trading applications placed as close as possible to the trade execution venue. A challenge with these "black box" trading engines is that they compound the volume increase by issuing orders only to cancel and re-submit them. The cause of this behavior is lack of visibility into which venue offers best execution. The human trader is now a "financial engineer," a "quant" (quantitative analyst) with programming skills, who can adjust trading models on the fly. Firms develop new financial instruments like weather derivatives or cross-asset class trades, and they need to deploy the new applications quickly and in a scalable fashion.


In the long term, competitive differentiation should come from analysis, not just knowledge. The star traders of tomorrow assume risk, attain true client insight, and consistently beat the market (source IBM: www-935.ibm/services/us/imc/pdf/ge510-6270-trader.pdf).


Business resilience has been one of the main concerns of trading firms since September 11, 2001. Solutions in this area range from redundant data centers situated in different geographies and connected to multiple trading venues, to virtual trader solutions offering power traders most of the functionality of a trading floor at a remote location.


The financial services industry is one of the most demanding in terms of IT requirements. The industry is experiencing an architectural shift towards Services-Oriented Architecture (SOA), Web services, and virtualization of IT resources. SOA takes advantage of the increase in network speed to enable dynamic binding and virtualization of software components. This allows the creation of new applications without losing the investment in existing systems and infrastructure. The concept has the potential to revolutionize the way integration is done, enabling significant reductions in the complexity and cost of such integration (gigaspaces/download/MerrilLynchGigaSpacesWP.pdf).


Another trend is the consolidation of servers into data center server farms, while trader desks have only KVM extensions and ultra-thin clients (e.g., SunRay and HP blade solutions). High-speed metro area networks enable market data to be multicast between different locations, enabling the virtualization of the trading floor.


High-Level Architecture.


Figure 1 depicts the high-level architecture of a trading environment. The ticker plant and the algorithmic trading engines are located in the high performance trading cluster in the firm's data center or at the exchange. The human traders are located in the end-user applications area.


Functionally, there are two application components in the enterprise trading environment: publishers and subscribers. The messaging bus provides the communication path between publishers and subscribers.


There are two types of traffic specific to a trading environment:


• Market Data: Carries pricing information for financial instruments, news, and other value-added information such as analytics. It is unidirectional and very latency sensitive, typically delivered over UDP multicast. It is measured in updates/sec and in Mbps. Market data flows from one or multiple external feeds, coming from market data providers like stock exchanges, data aggregators, and ECNs. Each provider has its own market data format. The data is received by feed handlers, specialized applications which normalize and clean the data and then send it to data consumers, such as pricing engines, algorithmic trading applications, or human traders. Sell-side firms also send the market data to their clients, buy-side firms such as mutual funds, hedge funds, and other asset managers. Some buy-side firms may opt to receive direct feeds from exchanges, reducing latency.


Figure 1 Trading Architecture for a Buy Side/Sell Side Firm.


There is no industry standard for market data formats. Each exchange has its proprietary format. Financial content providers like Reuters and Bloomberg aggregate different sources of market data, normalize them, and add news or analytics. Examples of consolidated feeds are RDF (Reuters Data Feed), RWF (Reuters Wire Format), and Bloomberg Professional Services Data.


To deliver lower latency market data, both vendors have released real-time market data feeds which are less processed and carry less analytics:


– Bloomberg B-Pipe: With B-Pipe, Bloomberg de-couples its market data feed from its distribution platform because a Bloomberg terminal is not required to get B-Pipe. Wombat and Reuters feed handlers have announced support for B-Pipe.


A firm may decide to receive feeds directly from an exchange to reduce latency. The gains in transmission speed can be between 150 milliseconds and 500 milliseconds. These feeds are more complex and more expensive, and the firm has to build and maintain its own ticker plant (financetech/featured/showArticle.jhtml?articleID=60404306).


• Trading Orders: This type of traffic carries the actual trades. It is bi-directional and very latency sensitive. It is measured in messages/sec and in Mbps. The orders originate from a buy side or sell side firm and are sent to trading venues like an exchange or ECN for execution. The most common format for order transport is FIX (Financial Information eXchange, fixprotocol/). The applications which handle FIX messages are called FIX engines, and they interface with order management systems (OMS).
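
For readers unfamiliar with the wire format, the sketch below builds a minimal FIX-style tag=value message in Java. The delimiter, BodyLength, and CheckSum rules follow the public FIX specification; the sender, target, symbol, and quantities are made-up illustration values, and a production system would rely on a full FIX engine rather than hand-rolled strings.

```java
import java.nio.charset.StandardCharsets;

// Minimal illustration of the FIX tag=value wire format (not a real FIX engine).
// Field values (sender, target, symbol, quantity) are hypothetical.
public class FixMessageSketch {
    private static final char SOH = '\u0001';   // FIX field delimiter

    public static void main(String[] args) {
        // Body: everything after BodyLength (9=) and before CheckSum (10=)
        String body = "35=D" + SOH            // MsgType = NewOrderSingle
                    + "49=BUYSIDE" + SOH      // SenderCompID (example value)
                    + "56=BROKER" + SOH       // TargetCompID (example value)
                    + "34=1" + SOH            // MsgSeqNum
                    + "55=CSCO" + SOH         // Symbol
                    + "54=1" + SOH            // Side = Buy
                    + "38=100" + SOH          // OrderQty
                    + "40=1" + SOH;           // OrdType = Market

        String header = "8=FIX.4.2" + SOH
                      + "9=" + body.getBytes(StandardCharsets.US_ASCII).length + SOH;
        String withoutChecksum = header + body;

        // CheckSum (tag 10): sum of all bytes so far, modulo 256, zero-padded to 3 digits
        int sum = 0;
        for (byte b : withoutChecksum.getBytes(StandardCharsets.US_ASCII)) {
            sum += b & 0xFF;
        }
        String message = withoutChecksum + "10=" + String.format("%03d", sum % 256) + SOH;

        // Print with '|' in place of SOH so the structure is visible
        System.out.println(message.replace(SOH, '|'));
    }
}
```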


An optimization to FIX is called FAST (FIX Adapted for Streaming), which uses a compression schema to reduce message length and, in effect, reduce latency. FAST is targeted more at market data delivery and has the potential to become a standard. FAST can also be used as a compression schema for proprietary market data formats.


To reduce latency, firms may opt to establish Direct Market Access (DMA).


DMA is the automated process of routing a securities order directly to an execution venue, therefore avoiding the intervention of a third party (towergroup/research/content/glossary.jsp?page=1&glossaryId=383). DMA requires a direct connection to the execution venue.


The messaging bus is middleware software from vendors like Tibco, 29West, Reuters RMDS, or an open source platform such as AMQP. The messaging bus uses a reliable mechanism to deliver messages. The transport can be done over TCP/IP (Tibco EMS, 29West, RMDS, and AMQP) or UDP/multicast (Tibco RV, 29West, and RMDS). One important concept in message distribution is the "topic," which is a subset of market data defined by criteria such as ticker symbol, industry, or a certain basket of financial instruments. Subscribers join topic groups mapped to one or multiple sub-topics in order to receive only the relevant information. In the past, all traders received all market data. At the current volumes of traffic, this would be sub-optimal.


The network plays a critical role in the trading environment. Market data is carried to the trading floor, where the human traders are located, over a high-speed campus or metro area network. High availability and low latency, as well as high throughput, are the most important metrics.


The high performance trading environment has most of its components in the data center server farm. To minimize latency, the algorithmic trading engines need to be located in the proximity of the feed handlers, FIX engines, and order management systems. An alternate deployment model has the algorithmic trading systems located at an exchange or at a service provider with fast connectivity to multiple exchanges.


Deployment Models.


There are two deployment models for a high performance trading platform. Firms may opt to have a mix of the two:


• Data center of the trading firm (Figure 2): This is the traditional model, where a full-fledged trading platform is developed and maintained by the firm, with communication links to all the trading venues. Latency varies with the speed of the links and the number of hops between the firm and the venues.


Figure 2 Traditional Deployment Model.


• Co-location at the trading venue (exchanges, financial service providers (FSP)) (Figure 3):


The trading firm deploys its automated trading platform as close as possible to the execution venues to minimize latency.


Figure 3 Hosted Deployment Model.


Services-Oriented Trading Architecture.


We propose a services-oriented framework for building the next-generation trading architecture. This approach provides a conceptual framework and an implementation path based on modularization and minimization of inter-dependencies.


This framework provides firms with a methodology to:


• Evaluate their current state in terms of services.


• Prioritize services based on their value to the business.


• Evolve the trading platform to the desired state using a modular approach.


The high performance trading architecture relies on the following services, as defined by the services architecture framework represented in Figure 4.


Figure 4 Services Framework for High Performance Trading.


Table 1 Service Descriptions and Technologies.


Ultra-low latency messaging: Hardware appliances, software agents, and routing modules.


Computing services: OS and I/O virtualization, Remote Direct Memory Access (RDMA), TCP Offload Engines (TOE).


Application virtualization: Middleware which parallelizes application processing.


Data virtualization: Middleware which accelerates data access for applications, for example, in-memory caching.


Multicast service: Hardware-assisted multicast replication throughout the network; multicast Layer 2 and Layer 3 optimizations.


Storage services: Virtualization of storage hardware (VSANs), data replication, remote backup, and file virtualization.


Trading resilience and mobility: Local and site load balancing and high availability campus networks.


Wide area application services: Acceleration of applications over a WAN connection for traders residing outside the campus.


Thin client service: De-coupling of the computing resources from the end-user-facing terminals.


Ultra-Low Latency Messaging Service.


This service is provided by the messaging bus, a software system that solves the problem of connecting many-to-many applications. The system consists of:


• A set of pre-defined message schemas.


• A set of common command messages.


• A shared application infrastructure for sending the messages to recipients. The shared infrastructure can be based on a message broker or on a publish/subscribe model.


The main requirements for the next-generation messaging bus are (source 29West):


• Lowest possible latency (e.g., less than 100 microseconds)


• Stability under heavy load (e.g., more than 1.4 million msg/sec)


• Control and flexibility (rate control and configurable transports)


There are efforts in the industry to standardize the messaging bus. Advanced Message Queueing Protocol (AMQP) is an example of an open standard championed by J.P. Morgan Chase and supported by a group of vendors such as Cisco, Envoy Technologies, Red Hat, TWIST Process Innovations, IONA, 29West, and iMatix. Two of the main goals are to provide a simpler path to interoperability for applications written on different platforms and modularity so that the middleware can evolve easily.


In very general terms, an AMQP server is analogous to an e-mail server, with each exchange acting as a message transfer agent and each message queue as a mailbox. The bindings define the routing tables in each transfer agent. Publishers send messages to individual transfer agents, which then route the messages into mailboxes. Consumers take messages from mailboxes, which creates a powerful and flexible model that remains simple (source: amqp/tikiwiki/tiki-index.php?page=OpenApproach#Why_AMQP_).
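
As an illustration of the exchange/queue/binding model described above, here is a small sketch using the RabbitMQ Java client, one open-source AMQP implementation (the document itself does not name a specific broker). The broker address, exchange, queue, and routing key are hypothetical.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

// Sketch of the AMQP model: an exchange routes messages into queues via bindings,
// and consumers read from the queues (the "mailboxes" in the e-mail analogy).
// Broker address, exchange, queue, and routing key are hypothetical.
public class AmqpSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");                       // assumed local broker

        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {

            // Declare a topic exchange (the "message transfer agent")
            channel.exchangeDeclare("marketdata", "topic");

            // Declare a queue (the "mailbox") and bind it with a routing pattern
            String queue = channel.queueDeclare().getQueue();
            channel.queueBind(queue, "marketdata", "equities.us.*");

            // Consumer: receives only messages whose routing key matches the binding
            DeliverCallback onDeliver = (tag, delivery) ->
                    System.out.println(delivery.getEnvelope().getRoutingKey()
                            + " -> " + new String(delivery.getBody()));
            channel.basicConsume(queue, true, onDeliver, tag -> { });

            // Publisher: sends to the exchange, not directly to a queue
            channel.basicPublish("marketdata", "equities.us.CSCO", null,
                    "last=26.45".getBytes());

            Thread.sleep(500);                              // let the delivery arrive
        }
    }
}
```

Note that the publisher never addresses a queue directly; the binding pattern decides which mailboxes receive the message, which is the decoupling argument made above.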


Latency Monitoring Service.


The main requirements for this service are:


• Sub-millisecond accuracy of measurements.


• Near-real-time visibility without adding latency to the trading traffic.


• Ability to differentiate application processing latency from network transit latency.


• Ability to handle high message rates.


• Provision of a programmatic interface for trading applications to receive latency data, thus enabling algorithmic trading engines to adapt to changing conditions.


• Correlation of network events with application events for troubleshooting purposes.


Latency can be defined as the time interval between when a trade order is sent and when the same order is acknowledged and acted upon by the receiving party.


Addressing the latency issue is a complex problem, requiring a holistic approach that identifies all sources of latency and applies different technologies at different layers of the system.


Figure 5 depicts the variety of components that can introduce latency at each layer of the OSI stack. It also maps each source of latency to a possible solution and a monitoring solution. This layered approach can give firms a more structured way to attack the latency issue, whereby each component can be thought of as a service and treated consistently across the firm.


Maintaining an accurate measure of the dynamic state of this time interval across alternative routes and destinations can be of great assistance in tactical trading decisions. The ability to identify the exact location of delays, whether in the customer edge network, the central processing hub, or the transaction application level, significantly determines the ability of service providers to meet their trading service-level agreements (SLAs). For buy-side and sell-side firms, as well as for market data subscribers, the quick identification and removal of bottlenecks translates directly into enhanced trade opportunities and revenue.


Figure 5 Latency Management Architecture.


Cisco Low-Latency Monitoring Tools.


Traditional network monitoring tools operate with minutes or seconds of granularity. Next-generation trading platforms, especially those supporting algorithmic trading, require latencies of less than 5 ms and extremely low levels of packet loss. On a Gigabit LAN, a 100 ms microburst can cause 10,000 transactions to be lost or excessively delayed.


Cisco offers its customers a choice of tools to measure latency in a trading environment:


• Bandwidth Quality Manager (BQM) (OEM from Corvil)


• Cisco AON-based Financial Services Latency Monitoring Solution (FSMS)


Bandwidth Quality Manager.


Bandwidth Quality Manager (BQM) 4.0 is a next-generation network application performance management product that enables customers to monitor and provision their network for controlled levels of latency and loss performance. While BQM is not exclusively targeted at trading networks, its microsecond visibility combined with intelligent bandwidth provisioning features make it ideal for these demanding environments.


Cisco BQM 4.0 implements a broad set of patented and patent-pending traffic measurement and network analysis technologies that give the user unprecedented visibility and understanding of how to optimize the network for maximum application performance.


Cisco BQM is now supported on the Cisco Application Deployment Engine (ADE) product family. The Cisco ADE product family is the platform of choice for Cisco network management applications.


BQM Benefits.


Cisco BQM micro-visibility is the ability to detect, measure, and analyze latency, jitter, and loss-inducing traffic events down to microsecond levels of granularity with per-packet resolution. This enables Cisco BQM to detect and determine the impact of traffic events on network latency, jitter, and loss. Critical for trading environments is that BQM can support latency, loss, and jitter measurements one-way for both TCP and UDP (multicast) traffic. This means it reports seamlessly for both trading traffic and market data feeds.


BQM allows the user to specify a comprehensive set of thresholds (against microburst activity, latency, loss, jitter, utilization, and so on) on all interfaces. BQM then operates a background rolling packet capture. Whenever a threshold violation or other potential performance degradation event occurs, it triggers Cisco BQM to store the packet capture to disk for later analysis. This allows the user to examine in full detail the application traffic that was affected by the performance degradation ("the victims") and the traffic that caused the degradation ("the culprits"). This can significantly reduce the time spent diagnosing and resolving network performance issues.


BQM is also capable of providing detailed bandwidth and quality of service (QoS) policy provisioning recommendations, which the user can apply directly to achieve the desired network performance.


BQM Measurements.


To understand the difference between some of the more conventional measurement techniques and the visibility provided by BQM, we can look at some comparison graphs. In the first set of graphs (Figure 6 and Figure 7), we see the difference between the latency measured by BQM's Passive Network Quality Monitoring (PNQM) and the latency measured by injecting ping packets every 1 second into the traffic stream.


In Figure 6, we see the latency reported by 1-second ICMP ping packets for real network traffic (it is divided by 2 to give an estimate of the one-way delay). It shows the delay comfortably below about 5 ms almost all of the time.


Figure 6 Latency Reported by 1-Second ICMP Ping Packets for Real Network Traffic.


In Figure 7, we see the latency reported by PNQM for the same traffic at the same time. Here we see that by measuring the one-way latency of the actual application packets, we get a radically different picture. Here the latency is seen to be hovering around 20 ms, with occasional bursts far higher. The explanation is that because ping sends packets only every second, it completely misses most of the application traffic latency. In fact, ping results typically indicate only round-trip propagation delay rather than the actual application latency across the network.


Figure 7 Latency Reported by PNQM for Real Network Traffic.
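
The gap between probe-based and per-packet measurement is easy to see in code: if every application message carries a send timestamp, the receiver can compute a one-way latency for each message instead of sampling the path once per second. The sketch below assumes a hypothetical message layout (an 8-byte nanosecond timestamp prefix) and assumes sender and receiver clocks are synchronized, for example via PTP, which the document does not cover.

```java
import java.nio.ByteBuffer;

// Sketch: per-message one-way latency, assuming the first 8 bytes of every
// message carry the sender's send time in nanoseconds and that sender and
// receiver clocks share a common time base (e.g., PTP). Layout is hypothetical.
public class OneWayLatency {

    // Sender side: prepend a send timestamp to the payload
    static byte[] stamp(byte[] payload) {
        ByteBuffer buf = ByteBuffer.allocate(8 + payload.length);
        buf.putLong(System.nanoTime());   // requires a shared clock across hosts in practice
        buf.put(payload);
        return buf.array();
    }

    // Receiver side: one-way latency of this particular message, in microseconds
    static long oneWayMicros(byte[] message) {
        long sentNanos = ByteBuffer.wrap(message).getLong();
        return (System.nanoTime() - sentNanos) / 1_000L;
    }

    public static void main(String[] args) throws InterruptedException {
        byte[] msg = stamp("CSCO last=26.45".getBytes());
        Thread.sleep(2);                  // stand-in for network transit
        System.out.println("one-way latency ~" + oneWayMicros(msg) + " us");
    }
}
```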


In the second example (Figure 8), we see the difference in reported link load or saturation levels between a 5-minute average view and a 5 ms microburst view (BQM can report on microbursts down to about 10-100 nanosecond accuracy). The green line shows the average utilization at 5-minute averages to be low, maybe up to 5 Mbits/s. The dark blue plot shows the 5 ms microburst activity reaching between 75 Mbits/s and 100 Mbits/s, effectively the LAN speed. BQM shows this level of granularity for all applications, and it also gives clear provisioning rules to enable the user to control or neutralize these microbursts.


Figure 8 Difference in Reported Link Load Between a 5-Minute Average View and a 5 ms Microburst View.


BQM Deployment in the Trading Network.


Figure 9 shows a typical BQM deployment in a trading network.


Figure 9 Typical BQM Deployment in a Trading Network.


BQM can then be used to answer these types of questions:


• Are any of my core Gigabit LAN links saturated for more than X milliseconds? Is this causing loss? Which links would most benefit from an upgrade to EtherChannel or 10 Gigabit speeds?


• What application traffic is causing the saturation of my 1 Gigabit links?


• Is any of the market data experiencing end-to-end loss?


• How much additional latency does the failover data center experience? Is this link sized correctly to deal with microbursts?


• Are my traders getting low latency updates from the market data distribution layer? Are they seeing any delays greater than X milliseconds?


Being able to answer these questions simply and effectively saves time and money in running the trading network.


BQM is an essential tool for gaining visibility into market data and trading environments. It provides granular end-to-end latency measurements in complex infrastructures that experience high-volume data movement. Effectively detecting microbursts at sub-millisecond levels and receiving expert analysis on a particular event is invaluable to trading floor architects. Smart bandwidth provisioning recommendations, such as sizing and what-if analysis, provide greater agility to respond to volatile market conditions. As the explosion of algorithmic trading and increasing message rates continue, BQM, combined with its QoS tool, provides the capability of implementing QoS policies that can protect critical trading applications.


Cisco Financial Services Latency Monitoring Solution.


Cisco and Trading Metrics have collaborated on latency monitoring solutions for FIX order flow and market data monitoring. Cisco AON technology is the foundation for a new class of network-embedded products and solutions that help merge intelligent networks with application infrastructure, based on either service-oriented or traditional architectures. Trading Metrics is a leading provider of analytics software for network infrastructure and application latency monitoring purposes (tradingmetrics/).


The Cisco AON Financial Services Latency Monitoring Solution (FSMS) correlates two kinds of events at the point of observation:


• Network events correlated directly with coincident application message handling.


• Trade order flow and matching market update events.


Using time stamps asserted at the point of capture in the network, real-time analysis of these correlated data streams permits precise identification of bottlenecks across the infrastructure while a trade is being executed or market data is being distributed. By monitoring and measuring latency early in the cycle, financial companies can make better decisions about which network service, and which intermediary, market, or counterparty, to select for routing trade orders. Likewise, this knowledge allows more streamlined access to updated market data (stock quotes, economic news, and so on), which is an important basis for initiating, withdrawing from, or pursuing market opportunities.


The components of the solution are:


• AON hardware in three form factors:


– AON Network Module for Cisco 2600/2800/3700/3800 routers.


– AON Blade for the Cisco Catalyst 6500 series.


– AON 8340 Appliance.


• Trading Metrics M&A 2.0 software, which provides the monitoring and alerting application, displays latency graphs on a dashboard, and issues alerts when slowdowns occur (tradingmetrics/TM_brochure.pdf).


Figure 10 AON-Based FIX Latency Monitoring.


Cisco IP SLA.


Cisco IP SLA is an embedded network management tool in Cisco IOS which allows routers and switches to generate synthetic traffic streams which can be measured for latency, jitter, packet loss, and other criteria (cisco/go/ipsla).


Two key concepts are the source of the generated traffic and the target. Both of these run an IP SLA "responder," which has the responsibility to timestamp the traffic before it is sourced and again when it is returned by the target (for a round-trip measurement). Various traffic types can be sourced within IP SLA, and they are aimed at different metrics and target different services and applications. The UDP jitter operation is used to measure one-way and round-trip delay and to report variations. As the traffic is time stamped on both the sending and the target devices using the responder capability, the round-trip delay is characterized as the delta between the two timestamps.


A new feature was introduced in IOS 12.3(14)T, IP SLA Sub-Millisecond Reporting, which allows timestamps to be displayed with microsecond resolution, thus providing a level of granularity not previously available. This new feature has made IP SLA relevant to campus networks, where network latency is typically in the range of 300-800 microseconds and the ability to detect trends and spikes (brief trends) based on microsecond-granularity counters is a requirement for customers engaged in time-sensitive electronic trading environments.
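
The idea behind the responder, separating network transit time from the target's processing time, can be sketched independently of Cisco IOS: the responder stamps the probe on arrival and again on departure, and the source subtracts that dwell time from the measured round trip. The UDP port, packet layout, and loopback test below are hypothetical; this is an illustration of the timestamping idea, not the IP SLA implementation itself.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;

// Sketch of responder timestamping: the responder records arrival (T2) and
// departure (T3) inside the probe, so the source can compute network RTT as
// (T4 - T1) - (T3 - T2), excluding the responder's processing time.
public class ResponderProbeSketch {
    public static void main(String[] args) throws Exception {
        final int port = 9999;                              // hypothetical port

        // Responder: echo the probe back with receive/transmit timestamps appended
        Thread responder = new Thread(() -> {
            try (DatagramSocket sock = new DatagramSocket(port)) {
                byte[] buf = new byte[64];
                DatagramPacket p = new DatagramPacket(buf, buf.length);
                sock.receive(p);
                long rx = System.nanoTime();
                ByteBuffer reply = ByteBuffer.allocate(24)
                        .putLong(ByteBuffer.wrap(p.getData()).getLong())  // source T1
                        .putLong(rx)                                      // responder T2
                        .putLong(System.nanoTime());                      // responder T3
                sock.send(new DatagramPacket(reply.array(), 24, p.getAddress(), p.getPort()));
            } catch (Exception e) { e.printStackTrace(); }
        });
        responder.start();
        Thread.sleep(200);                                  // give the responder time to bind

        // Source: send a timestamped probe and separate network delay from dwell time
        try (DatagramSocket sock = new DatagramSocket()) {
            byte[] probe = ByteBuffer.allocate(8).putLong(System.nanoTime()).array();
            sock.send(new DatagramPacket(probe, 8, InetAddress.getLoopbackAddress(), port));

            byte[] buf = new byte[24];
            DatagramPacket reply = new DatagramPacket(buf, buf.length);
            sock.receive(reply);
            long t4 = System.nanoTime();

            ByteBuffer b = ByteBuffer.wrap(buf);
            long t1 = b.getLong(), t2 = b.getLong(), t3 = b.getLong();
            long networkRtt = (t4 - t1) - (t3 - t2);        // RTT minus responder dwell
            System.out.println("network RTT ~" + networkRtt / 1_000 + " us");
        }
    }
}
```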


As a result, IP SLA is now being considered by significant numbers of financial organizations, as they are all faced with requirements to:


• Report baseline latency to their users.


• Trend baseline latency over time.


• Respond quickly to traffic bursts that cause changes in the reported latency.


Sub-millisecond reporting is necessary for these customers, since many campus and backbone segments are currently delivering under a millisecond of latency across several switch hops. Electronic trading environments have generally worked to eliminate or minimize all areas of device and network latency to deliver rapid order fulfillment to the business. Reporting that network response times are "just under one millisecond" is no longer sufficient; the granularity of latency measurements reported across a network segment or backbone needs to be closer to 300-800 microseconds, with a degree of accuracy of 100 microseconds.


IP SLA recently added support for IP multicast test streams, which can measure market data latency.


A typical network topology is shown in Figure 11, with the IP SLA shadow routers, sources, and responders.


Figure 11 IP SLA Deployment.


Computing Services.


Computing services cover a wide range of technologies with the goal of eliminating the memory and CPU bottlenecks created by the processing of network packets. Trading applications consume high volumes of market data, and the servers have to dedicate resources to processing network traffic instead of application processing.


• Transport processing: At high speeds, network packet processing can consume a significant amount of server CPU cycles and memory. An established rule of thumb states that 1 Gbps of network bandwidth requires 1 GHz of processor capacity (source: Intel white paper on I/O acceleration, intel/technology/ioacceleration/306517.pdf).


• Intermediate buffer copying: In a conventional network stack implementation, data needs to be copied by the CPU between network buffers and application buffers. This overhead is worsened by the fact that memory speeds have not kept up with the increases in CPU speeds. For example, processors like the Intel Xeon are approaching 4 GHz, while RAM chips hover around 400 MHz (for DDR 3200 memory) (source: Intel, intel/technology/ioacceleration/306517.pdf).


• Context switching: Every time an individual packet needs to be processed, the CPU performs a context switch from application context to network traffic context. This overhead could be reduced if the switch occurred only when the whole application buffer is complete.


Figure 12 Sources of Overhead in Data Center Servers.


• TCP Offload Engine (TOE): Offloads transport processing cycles to the NIC. Moves TCP/IP protocol stack buffer copies from system memory to NIC memory.


• Remote Direct Memory Access (RDMA): Enables a network adapter to transfer data directly from application to application without involving the operating system. Eliminates intermediate and application buffer copies (memory bandwidth consumption).


• Kernel bypass: Direct user-level access to hardware. Dramatically reduces application context switches.


Figure 13 RDMA and Kernel Bypass.


InfiniBand is a point-to-point (switched fabric) bidirectional serial communication link which implements RDMA, among other features. Cisco offers an InfiniBand switch, the Server Fabric Switch (SFS): cisco/application/pdf/en/us/guest/netsol/ns500/c643/cdccont_0900aecd804c35cb.pdf.


Figure 14 Typical SFS Deployment.


Trading applications benefit from the reduction in latency and latency variability, as proved by a test performed with the Cisco SFS and Wombat feed handlers by Stac Research:


Application Virtualization Service.


De-coupling applications from the underlying OS and server hardware enables them to run as network services. One application can be run in parallel on multiple servers, or multiple applications can be run on the same server, as the best resource allocation dictates. This decoupling enables better load balancing and disaster recovery for business continuance strategies. The process of re-allocating computing resources to an application is dynamic. Using an application virtualization system like Data Synapse's GridServer, applications can migrate, using pre-configured policies, to under-utilized servers in a supply-matches-demand process (networkworld/supp/2005/ndc1/022105virtual.html?page=2).


There are many business advantages for financial firms that adopt application virtualization:


• Faster time to market for new products and services.


• Faster integration of firms after merger and acquisition activity.


• Increased application availability.


• Better workload distribution, which creates more "head room" for processing spikes in trading volume.


• Operational efficiency and control.


• Reduction in IT complexity.


Currently, application virtualization is not used in the trading front-office. One use-case is risk modeling, like Monte Carlo simulations. As the technology evolves, it is conceivable that some of the trading platforms will adopt it.


Data Virtualization Service.


To effectively share resources across distributed enterprise applications, firms must be able to leverage data across multiple sources in real-time while ensuring data integrity. With solutions from data virtualization software vendors such as Gemstone or Tangosol (now Oracle), financial firms can access heterogeneous sources of data as a single system image that enables connectivity between business processes and unrestrained application access to distributed caching. The net result is that all users have instant access to these data resources across a distributed network (gridtoday/03/0210/101061.html).


This is called a data grid and is the first step in the process of creating what Gartner calls Extreme Transaction Processing (XTP) (gartner/DisplayDocument?ref=g_search&id=500947). Technologies such as data and applications virtualization enable financial firms to perform real-time complex analytics, event-driven applications, and dynamic resource allocation.


One example of data virtualization in action is a global order book application. An order book is the repository of active orders that is published by the exchange or other market makers. A global order book aggregates orders from around the world from markets that operate independently. The biggest challenge for the application is scalability over WAN connectivity because it has to maintain state. Today's data grids are localized in data centers connected by Metro Area Networks (MAN). This is mainly because the applications themselves have limits—they have been developed without the WAN in mind.


Figure 15 GemStone GemFire Distributed Caching.


Before data virtualization, applications used database clustering for failover and scalability. This solution is limited by the performance of the underlying database. Failover is slower because the data is committed to disc. With data grids, the data which is part of the active state is cached in memory, which reduces drastically the failover time. Scaling the data grid means just adding more distributed resources, providing a more deterministic performance compared to a database cluster.


Multicast Service.


Market data delivery is a perfect example of an application that needs to deliver the same data stream to hundreds and potentially thousands of end users. Market data services have been implemented with TCP or UDP broadcast as the network layer, but those implementations have limited scalability. Using TCP requires a separate socket and sliding window on the server for each recipient. UDP broadcast requires a separate copy of the stream for each destination subnet. Both of these methods exhaust the resources of the servers and the network. The server side must transmit and service each of the streams individually, which requires larger and larger server farms. On the network side, the required bandwidth for the application increases in a linear fashion. For example, to send a 1 Mbps stream to 1000 recipients using TCP requires 1 Gbps of bandwidth.


IP multicast is the only way to scale market data delivery. To deliver a 1 Mbps stream to 1000 recipients, IP multicast would require 1 Mbps. The stream can be delivered by as few as two servers—one primary and one backup for redundancy.
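
The arithmetic above follows from the network, rather than the server, replicating the stream. The sketch below uses plain Java multicast sockets to show a publisher sending one copy of each update to a group address that any number of receivers can join; the group address, port, and payload are hypothetical, and a production feed would run over reliable-multicast middleware rather than raw UDP.

```java
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;

// Sketch of one-to-many market data delivery over IP multicast: the sender
// transmits a single copy of each update; the network replicates it to every
// receiver that joined the group. Group address and port are hypothetical.
public class MulticastSketch {
    static final String GROUP = "239.1.1.1";   // administratively scoped group
    static final int PORT = 5000;

    // Receiver: join the group and print whatever arrives
    static void receiver() throws Exception {
        try (MulticastSocket sock = new MulticastSocket(PORT)) {
            sock.joinGroup(InetAddress.getByName(GROUP));
            byte[] buf = new byte[1500];
            DatagramPacket pkt = new DatagramPacket(buf, buf.length);
            sock.receive(pkt);
            System.out.println("update: " + new String(pkt.getData(), 0, pkt.getLength()));
        }
    }

    // Publisher: one send() regardless of how many receivers have subscribed
    static void publish(String update) throws Exception {
        try (MulticastSocket sock = new MulticastSocket()) {
            byte[] data = update.getBytes();
            sock.send(new DatagramPacket(data, data.length,
                    InetAddress.getByName(GROUP), PORT));
        }
    }

    public static void main(String[] args) throws Exception {
        Thread rx = new Thread(() -> {
            try { receiver(); } catch (Exception e) { e.printStackTrace(); }
        });
        rx.start();
        Thread.sleep(200);                      // let the receiver join the group
        publish("CSCO last=26.45");
        rx.join();
    }
}
```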


There are two main phases of market data delivery to the end user. In the first phase, the data stream must be brought from the exchange into the brokerage's network. Typically the feeds are terminated in a data center on the customer premise. The feeds are then processed by a feed handler, which may normalize the data stream into a common format and then republish into the application messaging servers in the data center.


The second phase involves injecting the data stream into the application messaging bus which feeds the core infrastructure of the trading applications. The large brokerage houses have thousands of applications that use the market data streams for various purposes, such as live trades, long term trending, arbitrage, etc. Many of these applications listen to the feeds and then republish their own analytical and derivative information. For example, a brokerage may compare the prices of CSCO to the option prices of CSCO on another exchange and then publish ratings which a different application may monitor to determine how much they are out of synchronization.


Figure 16 Market Data Distribution Players.


The delivery of these data streams is typically over a reliable multicast transport protocol, traditionally Tibco Rendezvous. Tibco RV operates in a publish and subscribe environment. Each financial instrument is given a subject name, such as CSCO.last. Each application server can request the individual instruments of interest by their subject name and receive just that subset of the information. This is called subject-based forwarding or filtering. Subject-based filtering is patented by Tibco.
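
A toy version of subject-based filtering is shown below: subscribers register interest in a subject such as CSCO.last and receive only matching updates. This is a simplified exact-match dispatcher for illustration, not Tibco's actual wildcard subject grammar.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

// Simplified subject-based filtering: handlers are registered per subject and
// only receive updates published on that exact subject.
public class SubjectBus {
    private final Map<String, List<Consumer<String>>> subs = new ConcurrentHashMap<>();

    public void subscribe(String subject, Consumer<String> handler) {
        subs.computeIfAbsent(subject, s -> new ArrayList<>()).add(handler);
    }

    public void publish(String subject, String payload) {
        // Deliver only to handlers registered for this subject
        subs.getOrDefault(subject, List.of())
            .forEach(h -> h.accept(subject + " " + payload));
    }

    public static void main(String[] args) {
        SubjectBus bus = new SubjectBus();
        bus.subscribe("CSCO.last", System.out::println);   // interested in one instrument
        bus.publish("CSCO.last", "26.45");                  // delivered
        bus.publish("INTC.last", "21.10");                  // filtered out, no subscriber
    }
}
```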


A distinction should be made between the first and second phases of market data delivery. The delivery of market data from the exchange to the brokerage is mostly a one-to-many application. The only exception to the unidirectional nature of market data may be retransmission requests, which are usually sent using unicast. The trading applications, however, are definitely many-to-many applications and may interact with the exchanges to place orders.


Figure 17 Market Data Architecture.


Design Issues.


Number of Groups/Channels to Use.


Many application developers consider using thousands of multicast groups to give them the ability to divide up products or instruments into small buckets. Normally these applications send many small messages as part of their information bus. Usually several messages are sent in each packet that are received by many users. Sending fewer messages in each packet increases the overhead necessary for each message.


In the extreme case, sending only one message in each packet quickly reaches the point of diminishing returns—there is more overhead sent than actual data. Application developers must find a reasonable compromise between the number of groups and breaking up their products into logical buckets.


Consider, for example, the Nasdaq Quotation Dissemination Service (NQDS). The instruments are broken up alphabetically:


Another example is the Nasdaq Totalview service, broken up this way:


This approach allows for straightforward network/application management, but does not necessarily allow for optimized bandwidth utilization for most users. A user of NQDS that is interested in technology stocks, and would like to subscribe to just CSCO and INTL, would have to pull down all the data for the first two groups of NQDS. Understanding the way users pull down the data and then organize it into appropriate logical groups optimizes the bandwidth for each user.


In many market data applications, optimizing the data organization would be of limited value. Typically customers bring in all data into a few machines and filter the instruments. Using more groups is just more overhead for the stack and does not help the customers conserve bandwidth. Another approach might be to keep the groups down to a minimum level and use UDP port numbers to further differentiate if necessary. The other extreme would be to use just one multicast group for the entire application and then have the end user filter the data. In some situations this may be sufficient.


Intermittent Sources.


A common issue with market data applications is servers that send data to a multicast group and then go silent for more than 3.5 minutes. These intermittent sources may cause thrashing of state on the network and can introduce packet loss during the window of time when soft state and then hardware shortcuts are being created.


PIM-Bidir or PIM-SSM.


The first and best solution for intermittent sources is to use PIM-Bidir for many-to-many applications and PIM-SSM for one-to-many applications.


Both of these optimizations of the PIM protocol do not have any data-driven events in creating forwarding state. That means that as long as the receivers are subscribed to the streams, the network has the forwarding state created in the hardware switching path.


Intermittent sources are not an issue with PIM-Bidir and PIM-SSM.


Null Packets.


In PIM-SM environments a common method to make sure forwarding state is created is to send a burst of null packets to the multicast group before the actual data stream. The application must efficiently ignore these null data packets to ensure it does not affect performance. The sources must only send the burst of packets if they have been silent for more than 3 minutes. A good practice is to send the burst if the source is silent for more than a minute. Many financials send out an initial burst of traffic in the morning and then all well-behaved sources do not have problems.


Periodic Keepalives or Heartbeats.


An alternative approach for PIM-SM environments is for sources to send periodic heartbeat messages to the multicast groups. This is a similar approach to the null packets, but the packets can be sent on a regular timer so that the forwarding state never expires.
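
A sender-side keepalive of this kind can be as simple as a scheduled task that publishes a small packet whenever the source has been idle longer than a threshold. In the sketch below the 60-second idle threshold, group address, port, and payload are illustrative values; receivers are expected to discard the heartbeat payload.

```java
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of a source-side heartbeat: if no real data has been sent for a while,
// publish a tiny keepalive so the multicast forwarding state never times out.
public class HeartbeatSender {
    private static final long IDLE_THRESHOLD_MS = 60_000;      // illustrative threshold
    private final AtomicLong lastSend = new AtomicLong(System.currentTimeMillis());
    private final MulticastSocket socket;
    private final InetAddress group;
    private final int port;

    HeartbeatSender(String groupAddr, int port) throws Exception {
        this.socket = new MulticastSocket();
        this.group = InetAddress.getByName(groupAddr);
        this.port = port;
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        // Check every 10 seconds whether a keepalive is needed
        timer.scheduleAtFixedRate(this::maybeHeartbeat, 10, 10, TimeUnit.SECONDS);
    }

    void send(byte[] data) throws Exception {
        socket.send(new DatagramPacket(data, data.length, group, port));
        lastSend.set(System.currentTimeMillis());
    }

    private void maybeHeartbeat() {
        if (System.currentTimeMillis() - lastSend.get() > IDLE_THRESHOLD_MS) {
            try {
                send("HEARTBEAT".getBytes());   // receivers must ignore this payload
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
}
```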


S, G Expiry Timer.


Finally, Cisco has made a modification to the operation of the S, G expiry timer in IOS. There is now a CLI knob to allow the state for a S, G to stay alive for hours without any traffic being sent. The (S, G) expiry timer is configurable. This approach should be considered a workaround until PIM-Bidir or PIM-SSM is deployed or the application is fixed.


RTCP Feedback.


A common issue with real time voice and video applications that use RTP is the use of RTCP feedback traffic. Unnecessary use of the feedback option can create excessive multicast state in the network. If the RTCP traffic is not required by the application it should be avoided.


Fast Producers and Slow Consumers.


Today many servers providing market data are attached at Gigabit speeds, while the receivers are attached at different speeds, usually 100Mbps. This creates the potential for receivers to drop packets and request re-transmissions, which creates more traffic that the slowest consumers cannot handle, continuing the vicious circle.


The solution needs to be some type of access control in the application that limits the amount of data that one host can request. QoS and other network functions can mitigate the problem, but ultimately the subscriptions need to be managed in the application.


Tibco Heartbeats.


TibcoRV has had the ability to use IP multicast for the heartbeat between the TICs for many years. However, there are some brokerage houses that are still using very old versions of TibcoRV that use UDP broadcast support for the resiliency. This limitation is often cited as a reason to maintain a Layer 2 infrastructure between TICs located in different data centers. These older versions of TibcoRV should be phased out in favor of the IP multicast supported versions.


Multicast Forwarding Options.


PIM Sparse Mode.


The standard IP multicast forwarding protocol used today for market data delivery is PIM Sparse Mode. It is supported on all Cisco routers and switches and is well understood. PIM-SM can be used in all the network components from the exchange, FSP, and brokerage.


There are, however, some long-standing issues and unnecessary complexity associated with a PIM-SM deployment that could be avoided by using PIM-Bidir and PIM-SSM. These are covered in the next sections.


The main components of the PIM-SM implementation are:


• PIM Sparse Mode v2.


• Shared Tree (spt-threshold infinity)


A design option in the brokerage or in the exchange.


Details of Anycast RP can be found in:


The classic high availability design for Tibco in the brokerage network is documented in:


Bidirectional PIM.


PIM-Bidir is an optimization of PIM Sparse Mode for many-to-many applications. It has several key advantages over a PIM-SM deployment:


• Better support for intermittent sources.


• No data-triggered events.


One of the weaknesses of PIM-SM is that the network continually needs to react to active data flows. This can cause non-deterministic behavior that may be hard to troubleshoot. PIM-Bidir has the following major protocol differences over PIM-SM:


– No source registration.


Source traffic is automatically sent to the RP and then down to the interested receivers. There is no unicast encapsulation, PIM joins from the RP to the first hop router and then registration stop messages.


All PIM-Bidir traffic is forwarded on a *,G forwarding entry. The router does not have to monitor the traffic flow on a *,G and then send joins when the traffic passes a threshold.


– No need for an actual RP.


The RP does not have an actual protocol function in PIM-Bidir. The RP acts as a routing vector in which all the traffic converges. The RP can be configured as an address that is not assigned to any particular device. This is called a Phantom RP.


– No need for MSDP.


MSDP provides source information between RPs in a PIM-SM network. PIM-Bidir does not use the active source information for any forwarding decisions and therefore MSDP is not required.


Bidirectional PIM is ideally suited for the brokerage network in the data center of the exchange. In this environment there are many sources sending to a relatively few set of groups in a many-to-many traffic pattern.


The key components of the PIM-Bidir implementation are:


Further details about Phantom RP and basic PIM-Bidir design are documented in:


Source Specific Multicast.


PIM-SSM is an optimization of PIM Sparse Mode for one-to-many applications. In certain environments it can offer several distinct advantages over PIM-SM. Like PIM-Bidir, PIM-SSM does not rely on any data-triggered events. Furthermore, PIM-SSM does not require an RP at all—there is no such concept in PIM-SSM. The forwarding information in the network is completely controlled by the interest of the receivers.
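
On the receiver host, that interest is expressed as a source-specific (S,G) join, which the operating system signals to the network with IGMPv3 (discussed further below). The sketch uses Java NIO's source-specific join; the interface name, group, source address, and port are hypothetical, and whether IGMPv3 is actually emitted depends on the host platform.

```java
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.NetworkInterface;
import java.net.StandardProtocolFamily;
import java.net.StandardSocketOptions;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;
import java.nio.channels.MembershipKey;

// Sketch of a source-specific (S,G) join from the receiver host. The join()
// overload that takes a source address requests a source-specific membership.
// Interface, group, source, and port are hypothetical.
public class SsmReceiverSketch {
    public static void main(String[] args) throws Exception {
        NetworkInterface nic = NetworkInterface.getByName("eth0");   // assumed NIC name
        InetAddress group = InetAddress.getByName("232.1.1.1");      // SSM range 232/8
        InetAddress source = InetAddress.getByName("192.0.2.10");    // feed source (example)

        DatagramChannel channel = DatagramChannel.open(StandardProtocolFamily.INET)
                .setOption(StandardSocketOptions.SO_REUSEADDR, true)
                .bind(new InetSocketAddress(5000))
                .setOption(StandardSocketOptions.IP_MULTICAST_IF, nic);

        // (S,G) join: only traffic from this source to this group is requested
        MembershipKey key = channel.join(group, nic, source);

        ByteBuffer buf = ByteBuffer.allocate(1500);
        channel.receive(buf);
        System.out.println("received " + buf.position() + " bytes; membership valid=" + key.isValid());
    }
}
```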


Source Specific Multicast is ideally suited for market data delivery in the financial service provider. The FSP can receive the feeds from the exchanges and then route them to the end of their network.


Many FSPs are also implementing MPLS and Multicast VPNs in their core. PIM-SSM is the preferred method for transporting traffic in VRFs.


When PIM-SSM is deployed all the way to the end user, the receiver indicates his interest in a particular S, G with IGMPv3. Even though IGMPv3 was defined by RFC 3376 back in October 2002, it still has not been implemented by all edge devices. This creates a challenge for deploying an end-to-end PIM-SSM service. A transitional solution has been developed by Cisco to enable an edge device that supports IGMPv2 to participate in a PIM-SSM service. This feature is called SSM Mapping and is documented in:


Storage Services.


The service provides storage capabilities into the market data and trading environments. Trading applications access backend storage to connect to different databases and other repositories consisting of portfolios, trade settlements, compliance data, management applications, Enterprise Service Bus (ESB), and other critical applications where reliability and security is critical to the success of the business. The main requirements for the service are:


Storage virtualization is an enabling technology that simplifies management of complex infrastructures, enables non-disruptive operations, and facilitates critical elements of a proactive information lifecycle management (ILM) strategy. EMC Invista running on the Cisco MDS 9000 enables heterogeneous storage pooling and dynamic storage provisioning, allowing allocation of any storage to any application. High availability is increased with seamless data migration. Appropriate class of storage is allocated to point-in-time copies (clones). Storage virtualization is also leveraged through the use of Virtual Storage Area Networks (VSANs), which enable the consolidation of multiple isolated SANs onto a single physical SAN infrastructure, while still partitioning them as completely separate logical entities. VSANs provide all the security and fabric services of traditional SANs, yet give organizations the flexibility to easily move resources from one VSAN to another. This results in increased disk and network utilization while driving down the cost of management. Integrated Inter VSAN Routing (IVR) enables sharing of common resources across VSANs.


Figure 18 High Performance Computing Storage.


Replication of data to a secondary and tertiary data center is crucial for business continuance. Replication offsite over Fiber Channel over IP (FCIP) coupled with write acceleration and tape acceleration provides improved performance over long distance. Continuous Data Replication (CDP) is another mechanism which is gaining popularity in the industry. It refers to backup of computer data by automatically saving a copy of every change made to that data, essentially capturing every version of the data that the user saves. It allows the user or administrator to restore data to any point in time. Solutions from EMC and Incipient utilize the SANTap protocol on the Storage Services Module (SSM) in the MDS platform to provide CDP functionality. The SSM uses the SANTap service to intercept and redirect a copy of a write between a given initiator and target. The appliance does not reside in the data path—it is completely passive. The CDP solutions typically leverage a history journal that tracks all changes and bookmarks that identify application-specific events. This ensures that data at any point in time is fully self-consistent and is recoverable instantly in the event of a site failure.


Backup procedure reliability and performance are extremely important when storing critical financial data to a SAN. The use of expensive media servers to move data from disk to tape devices can be cumbersome. Network-accelerated serverless backup (NASB) helps you back up increased amounts of data in shorter backup time frames by shifting the data movement from multiple backup servers to Cisco MDS 9000 Series multilayer switches. This technology decreases impact on application servers because the MDS offloads the application and backup servers. It also reduces the number of backup and media servers required, thus reducing CAPEX and OPEX. The flexibility of the backup environment increases because storage and tape drives can reside anywhere on the SAN.


Trading Resilience and Mobility.


The main requirements for this service are to provide the virtual trader:


• Fully scalable and redundant campus trading environment.


• Resilient server load balancing and high availability in analytic server farms.


• Global site load balancing that provide the capability to continue participating in the market venues of closest proximity.


A highly-available campus environment is capable of sustaining multiple failures (i.e., links, switches, modules, etc.), which provides non-disruptive access to trading systems for traders and market data feeds. Fine-tuned routing protocol timers, in conjunction with mechanisms such as NSF/SSO, provide subsecond recovery from any failure.


The high-speed interconnect between data centers can be DWDM/dark fiber, which provides business continuance in case of a site failure. Each site is 100km-200km apart, allowing synchronous data replication. Usually the distance for synchronous data replication is 100km, but with Read/Write Acceleration it can stretch to 200km. A tertiary data center can be greater than 200km away, which would replicate data in an asynchronous fashion.


Figure 19 Trading Resilience.


A robust server load balancing solution is required for order routing, algorithmic trading, risk analysis, and other services to offer continuous access to clients regardless of a server failure. Multiple servers encompass a "farm" and these hosts can be added/removed without disruption since they reside behind a virtual IP (VIP) address which is announced in the network.


A global site load balancing solution provides remote traders the resiliency to access trading environments which are closer to their location. This minimizes latency for execution times since requests are always routed to the nearest venue.


Figure 20 Virtualization of Trading Environment.


A trading environment can be virtualized to provide segmentation and resiliency in complex architectures. Figure 20 illustrates a high-level topology depicting multiple market data feeds entering the environment, whereby each vendor is assigned its own Virtual Routing and Forwarding (VRF) instance. The market data is transferred to a high-speed InfiniBand low-latency compute fabric where feed handlers, order routing systems, and algorithmic trading systems reside. All storage is accessed via a SAN and is also virtualized with VSANs, allowing further security and segmentation. The normalized data from the compute fabric is transferred to the campus trading environment where the trading desks reside.


Wide Area Application Services.


This service provides application acceleration and optimization capabilities for traders who are located outside of the core trading floor facility/data center and working from a remote office. To consolidate servers and increase security in remote offices, file servers, NAS filers, storage arrays, and tape drives are moved to a corporate data center to increase security and regulatory compliance and to facilitate centralized storage and archival management. As the traditional trading floor is becoming more virtual, wide area application services technology is being utilized to provide a "LAN-like" experience to remote traders when they access resources at the corporate site. Traders often utilize Microsoft Office applications, especially Excel, in addition to Sharepoint and Exchange. Excel is used heavily for modeling and permutations where sometimes only small portions of the file are changed. The CIFS protocol is notoriously "chatty," with several messages normally traversing the WAN for a simple file operation; this is addressed by Wide Area Application Services (WAAS) technology. Bloomberg and Reuters applications are also very popular financial tools which access a centralized SAN or NAS filer to retrieve critical data which is fused together before being presented to a trader's screen.


Figure 21 Wide Area Optimization.


A pair of Wide Area Application Engines (WAEs) that reside in the remote office and the data center provide local object caching to increase application performance. The remote office WAEs can be a module in the ISR router or a stand-alone appliance. The data center WAE devices are load balanced behind an Application Control Engine module installed in a pair of Catalyst 6500 series switches at the aggregation layer. The WAE appliance farm is represented by a virtual IP address. The local router in each site utilizes Web Cache Communication Protocol version 2 (WCCP v2) to redirect traffic to the WAE that intercepts the traffic and determines if there is a cache hit or miss. The content is served locally from the engine if it resides in cache; otherwise the request is sent across the WAN the initial time to retrieve the object. This methodology optimizes the trader experience by removing application latency and shielding the individual from any congestion in the WAN.


WAAS uses the following technologies to provide application acceleration:


• Data Redundancy Elimination (DRE) is an advanced form of network compression which allows the WAE to maintain a history of previously-seen TCP message traffic for the purposes of reducing redundancy found in network traffic. This combined with the Lempel-Ziv (LZ) compression algorithm reduces the number of redundant packets that traverse the WAN, which improves application transaction performance and conserves bandwidth.


• Transport Flow Optimization (TFO) employs a robust TCP proxy to safely optimize TCP at the WAE device by applying TCP-compliant optimizations to shield the clients and servers from poor TCP behavior because of WAN conditions. By running a TCP proxy between the devices and leveraging an optimized TCP stack between the devices, many of the problems that occur in the WAN are completely blocked from propagating back to trader desktops. The traders experience LAN-like TCP response times and behavior because the WAE is terminating TCP locally. TFO improves reliability and throughput through increases in TCP window scaling and sizing enhancements in addition to superior congestion management.


Thin Client Service.


This service provides a "thin" advanced trading desktop which delivers significant advantages to demanding trading floor environments requiring continuous growth in compute power. As financial institutions race to provide the best trade executions for their clients, traders are utilizing several simultaneous critical applications that facilitate complex transactions. It is not uncommon to find three or more workstations and monitors at a trader's desk which provide visibility into market liquidity, trading venues, news, analysis of complex portfolio simulations, and other financial tools. In addition, market dynamics continue to evolve with Direct Market Access (DMA), ECNs, alternative trading volumes, and upcoming regulation changes with Regulation National Market System (RegNMS) in the US and Markets in Financial Instruments Directive (MiFID) in Europe. At the same time, business seeks greater control, improved ROI, and additional flexibility, which creates greater demands on trading floor infrastructures.


Traders no longer require multiple workstations at their desk. Thin clients consist of keyboard, mouse, and multi-displays which provide a total trader desktop solution without compromising security. Hewlett Packard, Citrix, Desktone, Wyse, and other vendors provide thin client solutions to capitalize on the virtual desktop paradigm. Thin clients de-couple the user-facing hardware from the processing hardware, thus enabling IT to grow the processing power without changing anything on the end user side. The workstation computing power is stored in the data center on blade workstations, which provide greater scalability, increased data security, improved business continuance across multiple sites, and reduction in OPEX by removing the need to manage individual workstations on the trading floor. One blade workstation can be dedicated to a trader or shared among multiple traders depending on the requirements for computer power.


The "thin client" solution is optimized to work in a campus LAN environment, but can also extend the benefits to traders in remote locations. Latency is always a concern when there is a WAN interconnecting the blade workstation and thin client devices. The network connection needs to be sized accordingly so traffic is not dropped if saturation points exist in the WAN topology. WAN Quality of Service (QoS) should prioritize sensitive traffic. There are some guidelines which should be followed to allow for an optimized user experience. A typical highly-interactive desktop experience requires a client-to-blade round trip latency of <20ms for a 2Kb packet size. There may be a slight lag in display if network latency is between 20ms to 40ms. A typical trader desk with a four multi-display terminal requires 2-3Mbps bandwidth consumption with seamless communication with blade workstation(s) in the data center. Streaming video (800x600 at 24fps/full color) requires 9 Mbps bandwidth usage.


Figure 22 Thin Client Architecture.


Management of a large thin client environment is simplified since a centralized IT staff manages all of the blade workstations dispersed across multiple data centers. A trader is redirected to the most available environment in the enterprise in the event of a particular site failure. High availability is a key concern in critical financial environments and the Blade Workstation design provides rapid provisioning of another blade workstation in the data center. This resiliency provides greater uptime, increases in productivity, and OpEx reduction.


Advanced Encryption Standard.


Advanced Message Queueing Protocol.


Application Oriented Networking.


The Archipelago® Integrated Web book gives investors the unique opportunity to view the entire ArcaEx and ArcaEdge books in addition to books made available by other market participants.


ECN Order Book feed available via NASDAQ.


Chicago Board of Trade.


Class-Based Weighted Fair Queueing.


Continuous Data Replication.


Chicago Mercantile Exchange is engaged in trading of futures contracts and derivatives.


Central Processing Unit.


Distributed Defect Tracking System.


Direct Market Access.


Data Redundancy Elimination.


Dense Wavelength Division Multiplexing.


Electronic Communication Network.


Enterprise Service Bus.


Enterprise Solutions Engineering.


FIX Adapted for Streaming.


Fibre Channel over IP.


Financial Information Exchange.


Financial Services Latency Monitoring Solution.


Financial Service Provider.


Information Lifecycle Management.


Instinet Island Book.


Internetworking Operating System.


Keyboard Video Mouse.


Low Latency Queueing.


Metro Area Network.


Multilayer Director Switch.


Markets in Financial Instruments Directive.


Message Passing Interface is an industry standard specifying a library of functions to enable the passing of messages between nodes within a parallel computing environment.


Network Attached Storage.


Network Accelerated Serverless Backup.


Network Interface Card.


Nasdaq Quotation Dissemination Service.


Order Management System.


Open Systems Interconnection.


Protocol Independent Multicast.


PIM-Source Specific Multicast.


Quality of Service.


Random Access Memory.


Reuters Data Feed.


Reuters Data Feed Direct.


Remote Direct Memory Access.


Regulation National Market System.


Remote Graphics Software.


Reuters Market Data System.


RTP Control Protocol.


Real Time Protocol.


Reuters Wire Format.


Storage Area Network.


Small Computer System Interface.


Sockets Direct Protocol—Given that many modern applications are written using the sockets API, SDP can intercept the sockets at the kernel level and map these socket calls to an InfiniBand transport service that uses RDMA operations to offload data movement from the CPU to the HCA hardware.


Server Fabric Switch.


Secure Financial Transaction Infrastructure network developed to provide firms with excellent communication paths to NYSE Group, AMEX, Chicago Stock Exchange, NASDAQ, and other exchanges. It is often used for order routing.


11 Best Practices for Low Latency Systems.


It's been 8 years since Google noticed that an extra 500ms of latency dropped traffic by 20% and Amazon realized that 100ms of extra latency dropped sales by 1%. Ever since then developers have been racing to the bottom of the latency curve, culminating in front-end developers squeezing every last millisecond out of their JavaScript, CSS, and even HTML. What follows is a random walk through a variety of best practices to keep in mind when designing low latency systems. Most of these suggestions are taken to the logical extreme but of course tradeoffs can be made. (Thanks to an anonymous user for asking this question on Quora and getting me to put my thoughts down in writing.)


Choose the right language.


Scripting languages need not apply. Though they keep getting faster and faster, when you are looking to shave those last few milliseconds off your processing time you cannot have the overhead of an interpreted language. Additionally, you will want a strong memory model to enable lock free programming so you should be looking at Java, Scala, C++11 or Go.


Keep it all in memory.


I/O will kill your latency, so make sure all of your data is in memory. This generally means managing your own in-memory data structures and maintaining a persistent log, so you can rebuild the state after a machine or process restart. Some options for a persistent log include Bitcask, Krati, LevelDB and BDB-JE. Alternatively, you might be able to get away with running a local, persisted in-memory database like redis or MongoDB (with memory >> data). Note that you can lose some data on crash due to their background syncing to disk.
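Below is a minimal, illustrative Java sketch of this pattern (class and file names are invented for the example, and it is not one of the stores mentioned above): reads are served from an in-memory map, and every write is appended to a journal file that is replayed on startup to rebuild the state.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// All live data sits in a HashMap, while every mutation is appended to a
// journal file so the map can be rebuilt after a restart.
public class InMemoryStore {
    private final Map<String, String> data = new HashMap<>();
    private final FileWriter journal;

    public InMemoryStore(String journalPath) throws IOException {
        replay(journalPath);                          // rebuild state from the log
        journal = new FileWriter(journalPath, true);  // then append new writes
    }

    public synchronized void put(String key, String value) throws IOException {
        journal.write(key + "\t" + value + "\n");     // persist the mutation first
        journal.flush();
        data.put(key, value);                         // then apply it in memory
    }

    public synchronized String get(String key) {
        return data.get(key);                         // reads never touch disk
    }

    private void replay(String journalPath) throws IOException {
        try (BufferedReader in = new BufferedReader(new FileReader(journalPath))) {
            String line;
            while ((line = in.readLine()) != null) {
                String[] kv = line.split("\t", 2);
                if (kv.length == 2) data.put(kv[0], kv[1]);
            }
        } catch (java.io.FileNotFoundException noJournalYet) {
            // first run: nothing to replay
        }
    }
}
```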


Keep data and processing colocated.


Network hops are faster than disk seeks but even still they will add a lot of overhead. Ideally, your data should fit entirely in memory on one host. With AWS providing almost 1/4 TB of RAM in the cloud and physical servers offering multiple TBs this is generally possible. If you need to run on more than one host you should ensure that your data and requests are properly partitioned so that all the data necessary to service a given request is available locally.
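A hedged Java sketch of the partitioning idea, with invented host names: requests are routed by key so that all the data needed to serve a given key lives on one node.

```java
import java.util.List;

// Key-based partitioning: each request is routed to the single host that owns
// all the data for that key, so servicing it needs no cross-host calls.
public class Partitioner {
    private final List<String> hosts;

    public Partitioner(List<String> hosts) {
        this.hosts = hosts;
    }

    public String hostFor(String key) {
        // floorMod keeps the index non-negative even for negative hash codes
        return hosts.get(Math.floorMod(key.hashCode(), hosts.size()));
    }

    public static void main(String[] args) {
        Partitioner p = new Partitioner(List.of("node-a", "node-b", "node-c"));
        System.out.println(p.hostFor("account-42")); // the same key always maps to the same node
    }
}
```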


Keep the system underutilized.


Low latency requires always having resources to process the request. Don't try to run at the limit of what your hardware/software can provide. Always have lots of headroom for bursts, and then some.


Keep context switches to a minimum.


Context switches are a sign that you are doing more compute work than you have resources for. You will want to limit your number of threads to the number of cores on your system and to pin each thread to its own core.
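As a rough Java illustration of the first half of this advice (the JDK itself has no portable core-pinning API, so actual pinning is assumed to be done with OS tools such as taskset/cgroups or a third-party affinity library):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Size the worker pool to the core count so busy threads are not preempted by
// each other; per-core pinning itself happens outside the JVM in this sketch.
public class CoreSizedWorkers {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService workers = Executors.newFixedThreadPool(cores);
        for (int i = 0; i < cores; i++) {
            workers.submit(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    // each worker polls its own input source and processes it;
                    // no blocking calls, so the thread stays hot on its core
                }
            });
        }
        workers.shutdown(); // running workers continue until interrupted
    }
}
```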


Keep your reads sequential.


All forms of storage, whether rotational, flash based, or memory, perform significantly better when used sequentially. When issuing sequential reads to memory you trigger the use of prefetching at the RAM level as well as at the CPU cache level. If done properly, the next piece of data you need will always be in L1 cache right before you need it. The easiest way to help this process along is to make heavy use of arrays of primitive data types or structs. Following pointers, either through use of linked lists or through arrays of objects, should be avoided at all costs.
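A small, unscientific Java sketch of the difference (numbers will vary by machine; it is only meant to show the access-pattern contrast, not to be a rigorous benchmark):

```java
import java.util.LinkedList;

// Contrast a sequential scan over a primitive array with a pointer-chasing
// scan over a linked list of boxed values: the array walk is prefetch-friendly,
// the list walk touches scattered heap objects.
public class SequentialScan {
    public static void main(String[] args) {
        int n = 2_000_000;

        long[] packed = new long[n];                  // contiguous, cache-line friendly
        LinkedList<Long> chased = new LinkedList<>(); // node per element, scattered
        for (int i = 0; i < n; i++) {
            packed[i] = i;
            chased.add((long) i);
        }

        long t0 = System.nanoTime();
        long sumA = 0;
        for (int i = 0; i < n; i++) sumA += packed[i];
        long t1 = System.nanoTime();

        long sumB = 0;
        for (long v : chased) sumB += v;
        long t2 = System.nanoTime();

        System.out.printf("array: %d ms, linked list: %d ms (sums %d/%d)%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, sumA, sumB);
    }
}
```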


Batch your writes.


This sounds counterintuitive but you can gain significant improvements in performance by batching writes. However, there is a misconception that this means the system should wait an arbitrary amount of time before doing a write. Instead, one thread should spin in a tight loop doing I/O. Each write will batch all the data that arrived since the last write was issued. This makes for a very fast and adaptive system.
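A minimal Java sketch of this adaptive batching loop, with invented class names; the producer-facing queue and the output stream are stand-ins for whatever transport the real system uses:

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;

// Producers enqueue records; a single writer thread spins, draining whatever
// accumulated since the last write and issuing it as one I/O call. Light load
// means tiny batches and low latency; heavy load means big batches and throughput.
public class BatchingWriter implements Runnable {
    private final ConcurrentLinkedQueue<byte[]> pending = new ConcurrentLinkedQueue<>();
    private final OutputStream out;

    public BatchingWriter(OutputStream out) {
        this.out = out;
    }

    public void submit(byte[] record) {
        pending.add(record);              // called by any producer thread
    }

    @Override
    public void run() {
        List<byte[]> batch = new ArrayList<>();
        while (!Thread.currentThread().isInterrupted()) {
            byte[] record;
            while ((record = pending.poll()) != null) {
                batch.add(record);        // drain everything queued since the last write
            }
            if (batch.isEmpty()) continue; // spin; a back-off could be added here
            try {
                for (byte[] r : batch) out.write(r);
                out.flush();              // one flush per batch, not per record
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
            batch.clear();
        }
    }

    public static void main(String[] args) throws IOException {
        BatchingWriter writer = new BatchingWriter(new FileOutputStream("journal.bin"));
        new Thread(writer, "journal-writer").start(); // runs until interrupted
        writer.submit("order-1\n".getBytes());
        writer.submit("order-2\n".getBytes());
    }
}
```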


Respect your cache.


With all of these optimizations in place, memory access quickly becomes a bottleneck. Pinning threads to their own cores helps reduce CPU cache pollution and sequential I/O also helps preload the cache. Beyond that, you should keep memory sizes down using primitive data types so more data fits in cache. Additionally, you can look into cache-oblivious algorithms which work by recursively breaking down the data until it fits in cache and then doing any necessary processing.


Non blocking as much as possible.


Make friends with non blocking and wait free data structures and algorithms. Every time you use a lock you have to go down the stack to the OS to mediate the lock which is a huge overhead. Often, if you know what you are doing, you can get around locks by understanding the memory model of the JVM, C++11 or Go.
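As one concrete example of a lock-free structure, here is a classic Treiber stack sketched in Java using compare-and-set; it is illustrative only, not a drop-in production queue:

```java
import java.util.concurrent.atomic.AtomicReference;

// Lock-free (Treiber) stack: threads never take a lock or call into the kernel,
// they retry a compare-and-set until their update wins. Safe memory reclamation
// is handled here by the garbage collector.
public class LockFreeStack<T> {
    private static final class Node<T> {
        final T value;
        Node<T> next;
        Node(T value) { this.value = value; }
    }

    private final AtomicReference<Node<T>> head = new AtomicReference<>();

    public void push(T value) {
        Node<T> node = new Node<>(value);
        Node<T> current;
        do {
            current = head.get();
            node.next = current;
        } while (!head.compareAndSet(current, node)); // retry on contention
    }

    public T pop() {
        Node<T> current;
        Node<T> next;
        do {
            current = head.get();
            if (current == null) return null;          // empty stack
            next = current.next;
        } while (!head.compareAndSet(current, next));
        return current.value;
    }
}
```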


Async as much as possible.


Any processing and particularly any I/O that is not absolutely necessary for building the response should be done outside the critical path.


Parallelize as much as possible.


Any processing and particularly any I/O that can happen in parallel should be done in parallel. For instance if your high availability strategy includes logging transactions to disk and sending transactions to a secondary server those actions can happen in parallel.
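A hedged Java sketch of that exact example, with invented method names standing in for the real journaling and replication calls:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// The journal write and the replication send do not depend on each other, so
// they run on separate executor threads and the caller waits for the slower of
// the two rather than for their sum.
public class ParallelDurability {
    private final ExecutorService io = Executors.newFixedThreadPool(2);

    public void persist(String transaction) {
        CompletableFuture<Void> journaled = CompletableFuture.runAsync(
                () -> writeToDisk(transaction), io);
        CompletableFuture<Void> replicated = CompletableFuture.runAsync(
                () -> sendToSecondary(transaction), io);
        CompletableFuture.allOf(journaled, replicated).join(); // wait for both
    }

    private void writeToDisk(String tx) { /* append to the local journal */ }
    private void sendToSecondary(String tx) { /* ship to the standby server */ }
}
```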


Almost all of this comes from following what LMAX is doing with their Disruptor project. Read up on that and follow anything that Martin Thompson does.




Published by Benjamin Darfler.


29 thoughts on “11 Best Practices for Low Latency Systems”




Good article. One beef: Go doesn't have a sophisticated memory model like Java or C++11. If your system fits with the go-routine and channels architecture it's all good, else no luck. AFAIK you cannot opt out of the run-time scheduler, so no native OS threads, and the ability to build your own lock free data structures (like SPSC queues/ring-buffers) is also severely lacking.


Thanks for the reply. While the Go memory model (golang/ref/mem) might not be as robust as Java or C++11, I was under the impression that you could still manage to create lock free data structures using it. For example github/textnode/gringo, github/scryner/lfreequeue and github/mocchira/golfhash. Maybe I'm missing something? Admittedly I know much less about Go than the JVM.


Benjamin, the Go memory model detailed here: golang/ref/mem is mostly in terms of channels and mutexes. I looked through the packages that you listed and while the data structures there are “lock free” they are not equivalent to what one might build in Java/C++11. The sync package as of now, doesn’t have support for relaxed atomics or the acquire/release semantics of C++11. Without that support its difficult to build SPSC data structures as efficient as the ones possible in C++/Java. The projects that you link use atomic. Add… which is a sequentially consistent atomic. It’s built with XADD as it should be – github/tonnerre/golang/blob/master/src/pkg/sync/atomic/asm_amd64.s.


I am not trying to knock Go down. It takes minimal effort to write async IO and concurrent code that is sufficiently fast for most people. The std library too is highly tuned for performance. Golang also has support for structs which is missing in Java. But as it stands, I think the simplistic memory model and the go-routine runtime stand in the way of building the kind of systems you are talking about.


Thank you for the in depth reply. I hope people find this back and forth useful.


While a ‘native’ language is probably better, it’s not strictly required. Facebook showed us it can be done in PHP. Granted they use pre-compiled PHP with their HHVM machine. But it’s possible!


Unfortunately PHP still lacks an acceptable memory model, even if HHVM significantly improved the execution speed.


While I’ll fight to use higher-level languages just as much as the next guy, I think the only way to achieve the low-latency apps people are looking for is to drop down to a language like C. It seems the tougher it is to write in a language, the faster it executes.


I would strongly recommend you look at the work being done in the projects and blogs that I linked to. The JVM is quickly becoming the hot spot for these types of systems because it provides a strong memory model and garbage collection, which together enable lock free programming that is nearly or completely impossible with a weak or undefined memory model and reference counting for memory management.


I’ll take a look, Benjamin. Thanks for pointing them out.


Garbage collection for lock free programming is a bit of a deus ex machina. MPMC and SPSC queues can both be built without needing GC. There are also plenty of ways to do lock free programming without garbage collection and reference counting is not the only way. Hazard pointers, RCU, Proxy-Collectors etc all provide support for deferred reclamation and are usually coded in support of an algorithm (not generic), hence they are usually much easier to build. Of course the trade-off lies in the fact that production quality GCs have a lot of work put into them and will help the less experienced programmer write lock-free algorithms (should they be doing this at all?) without coding up deferred reclamation schemes. Some links on work done in this field: cs. toronto. edu/


Yes C/C++ just recently gained a memory model, but that doesn’t mean that they were completely unsuitable for lock-free code earlier. GCC and other high quality compilers had compiler specific directives to do lock free programming on supported platforms for a really long time – it was just not standardized in the language. Linux and other platforms have provided these primitives for some time too. Java’s unique position WAS that it provided a formalized memory model that it guaranteed to work on all supported platforms. Though in principle this is awesome, most server side developers work on one platform (Linux/Windows). They already had the tools to build lock free code for their platform.


GC is a great tool but not a necessary one. It has a cost both in terms of performance and in complexity (all the tricks needed to avoid STW GC). C++11/C11 already have support for proper memory models. Let's not forget that JVMs have no responsibility to support the Unsafe API in the future. Unsafe code is "unsafe" so you lose the benefits of Java's safety features. Finally IMO the Unsafe code used to lay out memory and simulate structs in Java looks a lot uglier than C/C++ structs where the compiler is doing that work for you in a reliable manner. C and C++ also provide access to all the low level platform specific power tools like the PAUSE instruction, SSE/AVX/NEON etc. You can even tune your code layout through linker scripts! The power provided by the C/C++ tool chain is really unmatched by the JVM. Java is a great platform nonetheless, but I think its biggest advantage is that ordinary business logic (90% of your code?) can still depend on GC and safety features and make use of highly tuned and tested libraries written with Unsafe. This is a great trade-off between getting the last 5% of perf and being productive. A trade-off that makes sense for a lot of people, but a trade-off nonetheless. Writing complicated application code in C/C++ is a nightmare after all.




Missing the 12th: do not use garbage collected languages. GC is a bottleneck in the worst case. It likely halts all threads. It's a global. It distracts the architect from managing one of the most critical resources (CPU-near memory) himself.


Actually a lot of this work comes directly from Java. To do lock free programming right you need a clear memory model, which C++ only recently gained. If you know how to work with GC and not against it you can create low latency systems often with much more ease.


I have to agree with Ben here. There has been a lot of progress in GC parallelism in the last decade or so, with the G1 collector being the latest incarnation. It may take a little time to tune the heap and various knobs to get the GC to collect with almost no pause, but this pales in comparison to the developer time it takes to not have GC.


You can even go one step further and create systems that produce so little garbage that you can easily push your GC outside of your operating window. This is how all of the high frequency trading shops do it when running on the JVM.




> Do not use garbage collected languages.


Or, at least, "traditional" garbage collected languages. They are different: while Erlang too has a collector, it doesn't create bottlenecks because it doesn't "stop the world" as Java does while collecting garbage; instead it halts individual small "micro-threads" on a microsecond scale, so it's not noticeable in the large.


Rewrite that to "traditional" garbage collection algorithms. At LMAX we use Azul Zing, and just by using a different JVM with a different approach to garbage collection, we've seen huge improvements in performance, because both major and minor GCs are orders of magnitude cheaper.


There are other costs which offset that, of course: you use a hell of a lot more heap, and Zing isn’t cheap.


Reblogged this on Java Prorgram Examples and commented:


One of the must-read articles for Java programmers: in 10 minutes it gives you the lessons you would otherwise learn only after spending considerable time tuning and developing low latency systems in Java.


Reviving an old thread, but (amazingly) this has to be pointed out:


1) Higher level languages (eg Java) don’t elicit functionality from the hardware that isn’t available to lower level languages (eg C); to state that so-and-so is “completely impossible” in C while readily doable in Java is complete rubbish without acknowledging that Java runs on virtual hardware where the JVM has to synthesize functionality required by Java but not provided by the physical hardware. If a JVM (eg written in C) can synthesize functionality X, then so can a C programmer.


2) “Lock free” isn’t what people think it is, except almost by coincidence in certain circumstances, such as single core x86; multicore x86 cannot run lock free without memory barriers, which have complexities and cost similar to regular locking. As per 1 above, if lock free works in a given environment, it is because it is supported by hardware, or emulated/synthesised by software in a virtual environment.


Great points, Julius. The point I was trying to make (maybe unsuccessfully) is that it's prohibitively difficult to apply many of these patterns in C since they rely on GC. It goes beyond simply using memory barriers. You have to consider freeing memory as well, which gets particularly difficult when you are dealing with lock free and wait free algorithms. This is where GC adds a huge win. That said, I hear Rust has some very interesting ideas around memory ownership that might begin to address some of these issues.


The LMAX Architecture.


LMAX is a new retail financial trading platform. As a result it has to process many trades with low latency. The system is built on the JVM platform and centers on a Business Logic Processor that can handle 6 million orders per second on a single thread. The Business Logic Processor runs entirely in-memory using event sourcing. The Business Logic Processor is surrounded by Disruptors - a concurrency component that implements a network of queues that operate without needing locks. During the design process the team concluded that recent directions in high-performance concurrency models using queues are fundamentally at odds with modern CPU design.


Over the last few years we keep hearing that "the free lunch is over"[1] - we can't expect increases in individual CPU speed. So to write fast code we need to explicitly use multiple processors with concurrent software. This is not good news - writing concurrent code is very hard. Locks and semaphores are hard to reason about and hard to test - meaning we are spending more time worrying about satisfying the computer than we are solving the domain problem. Various concurrency models, such as Actors and Software Transactional Memory, aim to make this easier - but there is still a burden that introduces bugs and complexity.


So I was fascinated to hear about a talk at QCon London in March last year from LMAX. LMAX is a new retail financial trading platform. Its business innovation is that it is a retail platform - allowing anyone to trade in a range of financial derivative products[2]. A trading platform like this needs very low latency - trades have to be processed quickly because the market is moving rapidly. A retail platform adds complexity because it has to do this for lots of people. So the result is more users, with lots of trades, all of which need to be processed quickly.[3]


Given the shift to multi-core thinking, this kind of demanding performance would naturally suggest an explicitly concurrent programming model - and indeed this was their starting point. But the thing that got people's attention at QCon was that this wasn't where they ended up. In fact they ended up by doing all the business logic for their platform: all trades, from all customers, in all markets - on a single thread. A thread that will process 6 million orders per second using commodity hardware.[4]


Processing lots of transactions with low latency and none of the complexities of concurrent code - how can I resist digging into that? Fortunately another difference LMAX has from other financial companies is that they are quite happy to talk about their technological decisions. So now that LMAX has been in production for a while, it's time to explore their fascinating design.


Overall Structure.


Figure 1: LMAX's architecture in three blobs.


At a top level, the architecture has three parts.


• business logic processor[5]
• input disruptor
• output disruptors


As its name implies, the business logic processor handles all the business logic in the application. As I indicated above, it does this as a single-threaded java program which reacts to method calls and produces output events. Consequently it's a simple java program that doesn't require any platform frameworks to run other than the JVM itself, which allows it to be easily run in test environments.


Although the Business Logic Processor can run in a simple environment for testing, there is rather more involved choreography to get it to run in a production setting. Input messages need to be taken off a network gateway and unmarshaled, replicated and journaled. Output messages need to be marshaled for the network. These tasks are handled by the input and output disruptors. Unlike the Business Logic Processor, these are concurrent components, since they involve IO operations which are both slow and independent. They were designed and built especially for LMAX, but they (like the overall architecture) are applicable elsewhere.


Business Logic Processor.


Keeping it all in memory.


The Business Logic Processor takes input messages sequentially (in the form of a method invocation), runs business logic on it, and emits output events. It operates entirely in-memory, there is no database or other persistent store. Keeping all data in-memory has two important benefits. Firstly it's fast - there's no database to provide slow IO to access, nor is there any transactional behavior to execute since all the processing is done sequentially. The second advantage is that it simplifies programming - there's no object/relational mapping to do. All the code can be written using Java's object model without having to make any compromises for the mapping to a database.


Using an in-memory structure has an important consequence - what happens if everything crashes? Even the most resilient systems are vulnerable to someone pulling the power. The heart of dealing with this is Event Sourcing - which means that the current state of the Business Logic Processor is entirely derivable by processing the input events. As long as the input event stream is kept in a durable store (which is one of the jobs of the input disruptor) you can always recreate the current state of the business logic engine by replaying the events.


A good way to understand this is to think of a version control system. Version control systems are a sequence of commits, at any time you can build a working copy by applying those commits. VCSs are more complicated than the Business Logic Processor because they must support branching, while the Business Logic Processor is a simple sequence.


So, in theory, you can always rebuild the state of the Business Logic Processor by reprocessing all the events. In practice, however, that would take too long should you need to spin one up. So, just as with version control systems, LMAX can make snapshots of the Business Logic Processor state and restore from the snapshots. They take a snapshot every night during periods of low activity. Restarting the Business Logic Processor is fast, a full restart - including restarting the JVM, loading a recent snapshot, and replaying a day's worth of journals - takes less than a minute.
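A toy Java sketch of the snapshot-plus-replay recovery idea (the domain, class names, and event type are invented for illustration; this is not LMAX's code):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Event sourcing in miniature: current state is whatever results from applying
// the journaled events in order. A snapshot is just the state plus the sequence
// it was taken at, so recovery is "load snapshot, then replay events after it".
public class EventSourcedBalances {
    public record Deposit(long sequence, String account, long amount) {}

    private final Map<String, Long> balances = new HashMap<>();
    private long lastApplied = 0;

    public void apply(Deposit event) {
        balances.merge(event.account(), event.amount(), Long::sum);
        lastApplied = event.sequence();
    }

    public static EventSourcedBalances recover(Map<String, Long> snapshot,
                                               long snapshotSequence,
                                               List<Deposit> journal) {
        EventSourcedBalances state = new EventSourcedBalances();
        state.balances.putAll(snapshot);
        state.lastApplied = snapshotSequence;
        for (Deposit e : journal) {
            if (e.sequence() > snapshotSequence) state.apply(e); // replay only the tail
        }
        return state;
    }

    public static void main(String[] args) {
        List<Deposit> journal = new ArrayList<>(List.of(
                new Deposit(1, "alice", 100),
                new Deposit(2, "bob", 50),
                new Deposit(3, "alice", 25)));
        EventSourcedBalances state =
                recover(Map.of("alice", 100L, "bob", 50L), 2, journal);
        System.out.println(state.balances); // alice=125, bob=50 (map order may vary)
    }
}
```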


Snapshots make starting up a new Business Logic Processor faster, but not quickly enough should a Business Logic Processor crash at 2pm. As a result LMAX keeps multiple Business Logic Processors running all the time[6]. Each input event is processed by multiple processors, but all but one processor has its output ignored. Should the live processor fail, the system switches to another one. This ability to handle fail-over is another benefit of using Event Sourcing.


By event sourcing into replicas they can switch between processors in a matter of micro-seconds. As well as taking snapshots every night, they also restart the Business Logic Processors every night. The replication allows them to do this with no downtime, so they continue to process trades 24/7.


For more background on Event Sourcing, see the draft pattern on my site from a few years ago. The article is more focused on handling temporal relationships rather than the benefits that LMAX use, but it does explain the core idea.


Event Sourcing is valuable because it allows the processor to run entirely in-memory, but it has another considerable advantage for diagnostics. If some unexpected behavior occurs, the team copies the sequence of events to their development environment and replays them there. This allows them to examine what happened much more easily than is possible in most environments.


This diagnostic capability extends to business diagnostics. There are some business tasks, such as in risk management, that require significant computation that isn't needed for processing orders. An example is getting a list of the top 20 customers by risk profile based on their current trading positions. The team handles this by spinning up a replicate domain model and carrying out the computation there, where it won't interfere with the core order processing. These analysis domain models can have variant data models, keep different data sets in memory, and run on different machines.


Tuning performance.


So far I've explained that the key to the speed of the Business Logic Processor is doing everything sequentially, in-memory. Just doing this (and nothing really stupid) allows developers to write code that can process 10K TPS[7]. They then found that concentrating on the simple elements of good code could bring this up into the 100K TPS range. This just needs well-factored code and small methods - essentially this allows Hotspot to do a better job of optimizing and for CPUs to be more efficient in caching the code as it's running.


It took a bit more cleverness to go up another order of magnitude. There are several things that the LMAX team found helpful to get there. One was to write custom implementations of the java collections that were designed to be cache-friendly and careful with garbage[8]. An example of this is using primitive java longs as hashmap keys with a specially written array backed Map implementation ( LongToObjectHashMap ). In general they've found that choice of data structures often makes a big difference. Most programmers just grab whatever List they used last time rather than thinking which implementation is the right one for this context.[9]
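For illustration, here is a hedged sketch of what an array-backed, primitive-long-keyed map can look like in Java; it is not LMAX's LongToObjectHashMap, and it omits resizing, so it assumes the capacity is a power of two and the load stays well below it:

```java
// Open-addressing map keyed by primitive longs: keys live in one long[] and
// values in a parallel Object[], so lookups avoid boxing and walk contiguous
// memory instead of chasing Entry objects. No resizing: keep the load low.
public class LongToObjectMap<V> {
    private final long[] keys;
    private final Object[] values;
    private final boolean[] used;
    private final int mask;

    public LongToObjectMap(int capacityPowerOfTwo) {
        keys = new long[capacityPowerOfTwo];
        values = new Object[capacityPowerOfTwo];
        used = new boolean[capacityPowerOfTwo];
        mask = capacityPowerOfTwo - 1;            // capacity must be a power of two
    }

    public void put(long key, V value) {
        int i = (int) (mix(key) & mask);
        while (used[i] && keys[i] != key) i = (i + 1) & mask; // linear probing
        keys[i] = key;
        values[i] = value;
        used[i] = true;
    }

    @SuppressWarnings("unchecked")
    public V get(long key) {
        int i = (int) (mix(key) & mask);
        while (used[i]) {
            if (keys[i] == key) return (V) values[i];
            i = (i + 1) & mask;
        }
        return null;
    }

    private static long mix(long key) {
        key ^= (key >>> 33);                      // cheap bit mixer to spread keys
        key *= 0xff51afd7ed558ccdL;
        return key ^ (key >>> 33);
    }
}
```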


Another technique to reach that top level of performance is putting attention into performance testing. I've long noticed that people talk a lot about techniques to improve performance, but the one thing that really makes a difference is to test it. Even good programmers are very good at constructing performance arguments that end up being wrong, so the best programmers prefer profilers and test cases to speculation.[10] The LMAX team has also found that writing tests first is a very effective discipline for performance tests.


Programming Model.


This style of processing does introduce some constraints into the way you write and organize the business logic. The first of these is that you have to tease out any interaction with external services. An external service call is going to be slow, and with a single thread will halt the entire order processing machine. As a result you can't make calls to external services within the business logic. Instead you need to finish that interaction with an output event, and wait for another input event to pick it back up again.


I'll use a simple non-LMAX example to illustrate. Imagine you are making an order for jelly beans by credit card. A simple retailing system would take your order information, use a credit card validation service to check your credit card number, and then confirm your order - all within a single operation. The thread processing your order would block while waiting for the credit card to be checked, but that block wouldn't be very long for the user, and the server can always run another thread on the processor while it's waiting.


In the LMAX architecture, you would split this operation into two. The first operation would capture the order information and finish by outputting an event (credit card validation requested) to the credit card company. The Business Logic Processor would then carry on processing events for other customers until it received a credit-card-validated event in its input event stream. On processing that event it would carry out the confirmation tasks for that order.
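A minimal Java sketch of that two-step flow, with invented event and interface names (this is not LMAX's actual API):

```java
// The first handler never blocks on the card check: it just emits a request
// event; a later input event completes (or cancels) the order.
public class OrderFlow {
    public record OrderPlaced(String orderId, String card, long amountCents) {}
    public record CardValidationRequested(String orderId, String card, long amountCents) {}
    public record CardValidated(String orderId, boolean approved) {}

    public interface EventBus { void publish(Object event); }

    private final EventBus bus;

    public OrderFlow(EventBus bus) { this.bus = bus; }

    // Step 1: capture the order and hand the slow external check to someone else.
    public void on(OrderPlaced e) {
        // ... store order details in memory ...
        bus.publish(new CardValidationRequested(e.orderId(), e.card(), e.amountCents()));
    }

    // Step 2: picked up later, when the validation result arrives as a new input event.
    public void on(CardValidated e) {
        if (e.approved()) {
            // ... confirm the order ...
        } else {
            // ... cancel the order ...
        }
    }
}
```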


Working in this kind of event-driven, asynchronous style is somewhat unusual - although using asynchrony to improve the responsiveness of an application is a familiar technique. It also helps the business process be more resilient, as you have to be more explicit in thinking about the different things that can happen with the remote application.


A second feature of the programming model lies in error handling. The traditional model of sessions and database transactions provides a helpful error handling capability. Should anything go wrong, it's easy to throw away everything that happened so far in the interaction. Session data is transient, and can be discarded, at the cost of some irritation to the user if in the middle of something complicated. If an error occurs on the database side you can rollback the transaction.


LMAX's in-memory structures are persistent across input events, so if there is an error it's important to not leave that memory in an inconsistent state. However there's no automated rollback facility. As a consequence the LMAX team puts a lot of attention into ensuring the input events are fully valid before doing any mutation of the in-memory persistent state. They have found that testing is a key tool in flushing out these kinds of problems before going into production.


Input and Output Disruptors.


Although the business logic occurs in a single thread, there are a number of tasks to be done before we can invoke a business object method. The original input for processing comes off the wire in the form of a message; this message needs to be unmarshaled into a form convenient for the Business Logic Processor to use. Event Sourcing relies on keeping a durable journal of all the input events, so each input message needs to be journaled onto a durable store. Finally the architecture relies on a cluster of Business Logic Processors, so we have to replicate the input messages across this cluster. Similarly on the output side, the output events need to be marshaled for transmission over the network.


Figure 2: The activities done by the input disruptor (using UML activity diagram notation)


The replicator and journaler involve IO and therefore are relatively slow. After all, the central idea of the Business Logic Processor is that it avoids doing any IO. Also these three tasks are relatively independent; all of them need to be done before the Business Logic Processor works on a message, but they can be done in any order. So unlike with the Business Logic Processor, where each trade changes the market for subsequent trades, there is a natural fit for concurrency.


To handle this concurrency the LMAX team developed a special concurrency component, which they call a Disruptor [11].


The LMAX team have released the source code for the Disruptor with an open source licence.


At a crude level you can think of a Disruptor as a multicast graph of queues where producers put objects on it that are sent to all the consumers for parallel consumption through separate downstream queues. When you look inside you see that this network of queues is really a single data structure - a ring buffer. Each producer and consumer has a sequence counter to indicate which slot in the buffer it's currently working on. Each producer/consumer writes its own sequence counter but can read the others' sequence counters. This way the producer can read the consumers' counters to ensure the slot it wants to write in is available without any locks on the counters. Similarly a consumer can ensure it only processes messages once another consumer is done with it by watching the counters.
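To make the sequence-counter idea concrete, here is a heavily simplified single-producer, single-consumer ring buffer sketched in Java; the real Disruptor supports multiple consumers, batching, and configurable wait strategies, none of which appear here:

```java
import java.util.concurrent.atomic.AtomicLong;

// Each side owns its own monotonically increasing sequence and only reads the
// other side's, so no locks are needed; the slot index is just sequence & mask.
public class SpscRingBuffer<T> {
    private final Object[] slots;
    private final int mask;
    private final AtomicLong producerSeq = new AtomicLong(0); // next slot to write
    private final AtomicLong consumerSeq = new AtomicLong(0); // next slot to read

    public SpscRingBuffer(int capacityPowerOfTwo) {
        slots = new Object[capacityPowerOfTwo];
        mask = capacityPowerOfTwo - 1;
    }

    public boolean offer(T value) {
        long seq = producerSeq.get();
        if (seq - consumerSeq.get() == slots.length) return false; // buffer full
        slots[(int) (seq & mask)] = value;
        producerSeq.set(seq + 1);        // publish: the consumer may now read this slot
        return true;
    }

    @SuppressWarnings("unchecked")
    public T poll() {
        long seq = consumerSeq.get();
        if (seq == producerSeq.get()) return null;                 // nothing published yet
        T value = (T) slots[(int) (seq & mask)];
        consumerSeq.set(seq + 1);        // free the slot for the producer
        return value;
    }
}
```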


Figure 3: The input disruptor coordinates one producer and four consumers.


Output disruptors are similar but they only have two sequential consumers for marshaling and output.[12] Output events are organized into several topics, so that messages can be sent to only the receivers who are interested in them. Each topic has its own disruptor.


The disruptors I've described are used in a style with one producer and multiple consumers, but this isn't a limitation of the design of the disruptor. The disruptor can work with multiple producers too, in this case it still doesn't need locks.[13]


A benefit of the disruptor design is that it makes it easier for consumers to catch up quickly if they run into a problem and fall behind. If the unmarshaler has a problem when processing on slot 15 and returns when the receiver is on slot 31, it can read data from slots 16-30 in one batch to catch up. This batch read of the data from the disruptor makes it easier for lagging consumers to catch up quickly, thus reducing overall latency.


I've described things here, with one each of the journaler, replicator, and unmarshaler - this indeed is what LMAX does. But the design would allow multiple of these components to run. If you ran two journalers then one would take the even slots and the other journaler would take the odd slots. This allows further concurrency of these IO operations should this become necessary.


The ring buffers are large: 20 million slots for input buffer and 4 million slots for each of the output buffers. The sequence counters are 64bit long integers that increase monotonically even as the ring slots wrap.[14] The buffer is set to a size that's a power of two so the compiler can do an efficient modulus operation to map from the sequence counter number to the slot number. Like the rest of the system, the disruptors are bounced overnight. This bounce is mainly done to wipe memory so that there is less chance of an expensive garbage collection event during trading. (I also think it's a good habit to regularly restart, so that you rehearse how to do it for emergencies.)


The journaler's job is to store all the events in a durable form, so that they can be replayed should anything go wrong. LMAX does not use a database for this, just the file system. They stream the events onto the disk. In modern terms, mechanical disks are horribly slow for random access, but very fast for streaming - hence the tag-line "disk is the new tape".[15]


Earlier on I mentioned that LMAX runs multiple copies of its system in a cluster to support rapid failover. The replicator keeps these nodes in sync. All communication in LMAX uses IP multicasting, so clients don't need to know which IP address is the master node. Only the master node listens directly to input events and runs a replicator. The replicator broadcasts the input events to the slave nodes. Should the master node go down, its lack of heartbeat will be noticed, another node becomes master, starts processing input events, and starts its replicator. Each node has its own input disruptor and thus has its own journal and does its own unmarshaling.


Even with IP multicasting, replication is still needed because IP messages can arrive in a different order on different nodes. The master node provides a deterministic sequence for the rest of the processing.


The unmarshaler turns the event data from the wire into a java object that can be used to invoke behavior on the Business Logic Processor. Therefore, unlike the other consumers, it needs to modify the data in the ring buffer so it can store this unmarshaled object. The rule here is that consumers are permitted to write to the ring buffer, but each writable field can only have one parallel consumer that's allowed to write to it. This preserves the principle of only having a single writer. [16]


Figure 4: The LMAX architecture with the disruptors expanded.


The disruptor is a general purpose component that can be used outside of the LMAX system. Usually financial companies are very secretive about their systems, keeping quiet even about items that aren't germane to their business. Not only has LMAX been open about its overall architecture, they have open-sourced the disruptor code - an act that makes me very happy. Not only will this allow other organizations to make use of the disruptor, it will also allow for more testing of its concurrency properties.


Queues and their lack of mechanical sympathy.


The LMAX architecture caught people's attention because it's a very different way of approaching a high performance system to what most people are thinking about. So far I've talked about how it works, but haven't delved too much into why it was developed this way. This tale is interesting in itself, because this architecture didn't just appear. It took a long time of trying more conventional alternatives, and realizing where they were flawed, before the team settled on this one.


Most business systems these days have a core architecture that relies on multiple active sessions coordinated through a transactional database. The LMAX team were familiar with this approach, and confident that it wouldn't work for LMAX. This assessment was founded in the experiences of Betfair - the parent company who set up LMAX. Betfair is a betting site that allows people to bet on sporting events. It handles very high volumes of traffic with a lot of contention - sports bets tend to burst around particular events. To make this work they have one of the hottest database installations around and have had to do many unnatural acts in order to make it work. Based on this experience they knew how difficult it was to maintain Betfair's performance and were sure that this kind of architecture would not work for the very low latency that a trading site would require. As a result they had to find a different approach.


Their initial approach was to follow what so many are saying these days - that to get high performance you need to use explicit concurrency. For this scenario, this means allowing orders to be processed by multiple threads in parallel. However, as is often the case with concurrency, the difficulty comes because these threads have to communicate with each other. Processing an order changes market conditions and these conditions need to be communicated.


The approach they explored early on was the Actor model and its cousin SEDA. The Actor model relies on independent, active objects with their own thread that communicate with each other via queues. Many people find this kind of concurrency model much easier to deal with than trying to do something based on locking primitives.


The team built a prototype exchange using the actor model and did performance tests on it. What they found was that the processors spent more time managing queues than doing the real logic of the application. Queue access was a bottleneck.


When pushing performance like this, it starts to become important to take account of the way modern hardware is constructed. The phrase Martin Thompson likes to use is "mechanical sympathy". The term comes from race car driving and it reflects the driver having an innate feel for the car, so they are able to feel how to get the best out of it. Many programmers, and I confess I fall into this camp, don't have much mechanical sympathy for how programming interacts with hardware. What's worse is that many programmers think they have mechanical sympathy, but it's built on notions of how hardware used to work that are now many years out of date.


One of the dominant factors with modern CPUs that affects latency is how the CPU interacts with memory. These days going to main memory is a very slow operation in CPU terms. CPUs have multiple levels of cache, each of which is significantly faster. So to increase speed you want to get your code and data in those caches.


At one level, the actor model helps here. You can think of an actor as its own object that clusters code and data, which is a natural unit for caching. But actors need to communicate, which they do through queues - and the LMAX team observed that it's the queues that interfere with caching.


The explanation runs like this: in order to put some data on a queue, you need to write to that queue. Similarly, to take data off the queue, you need to write to the queue to perform the removal. This is write contention - more than one client may need to write to the same data structure. To deal with the write contention a queue often uses locks. But if a lock is used, that can cause a context switch to the kernel. When this happens the processor involved is likely to lose the data in its caches.


The conclusion they came to was that to get the best caching behavior, you need a design that has only one core writing to any memory location[17]. Multiple readers are fine, processors often use special high-speed links between their caches. But queues fail the one-writer principle.


This analysis led the LMAX team to a couple of conclusions. Firstly it led to the design of the disruptor, which determinedly follows the single-writer constraint. Secondly it led to the idea of exploring the single-threaded business logic approach, asking the question of how fast a single thread can go if it's freed of concurrency management.


The essence of working on a single thread is to ensure that you have one thread running on one core, the caches warm up, and as much memory access as possible goes to the caches rather than to main memory. This means that both the code and the working set of data need to be as consistently accessed as possible. Also keeping small objects with code and data together allows them to be swapped between the caches as a unit, simplifying the cache management and again improving performance.


An essential part of the path to the LMAX architecture was the use of performance testing. The consideration and abandonment of an actor-based approach came from building and performance testing a prototype. Similarly many of the steps in improving the performance of the various components were enabled by performance tests. Mechanical sympathy is very valuable - it helps to form hypotheses about what improvements you can make, and guides you to forward steps rather than backward ones - but in the end it's the testing that gives you the convincing evidence.


Performance testing in this style, however, is not a well-understood topic. Regularly the LMAX team stresses that coming up with meaningful performance tests is often harder than developing the production code. Again mechanical sympathy is important to developing the right tests. Testing a low level concurrency component is meaningless unless you take into account the caching behavior of the CPU.


One particular lesson is the importance of writing tests against null components to ensure the performance test is fast enough to really measure what real components are doing. Writing fast test code is no easier than writing fast production code and it's too easy to get false results because the test isn't as fast as the component it's trying to measure.


Should you use this architecture?


At first glance, this architecture appears to be for a very small niche. After all the driver that led to it was to be able to run lots of complex transactions with very low latency - most applications don't need to run at 6 million TPS.


But the thing that fascinates me about this application is that they have ended up with a design which removes much of the programming complexity that plagues many software projects. The traditional model of concurrent sessions surrounding a transactional database isn't free of hassles. There's usually a non-trivial effort that goes into the relationship with the database. Object/relational mapping tools can help with much of the pain of dealing with a database, but they don't deal with it all. Most performance tuning of enterprise applications involves futzing around with SQL.


These days, you can get more main memory into your servers than us old guys could get as disk space. More and more applications are quite capable of putting all their working set in main memory - thus eliminating a source of both complexity and sluggishness. Event Sourcing provides a way to solve the durability problem for an in-memory system, running everything in a single thread solves the concurrency issue. The LMAX experience suggests that as long as you need less than a few million TPS, you'll have enough performance headroom.


There is a considerable overlap here with the growing interest in CQRS. An event sourced, in-memory processor is a natural choice for the command-side of a CQRS system. (Although the LMAX team does not currently use CQRS.)


So what indicates you shouldn't go down this path? This is always a tricky question for little-known techniques like this, since the profession needs more time to explore its boundaries. A starting point, however, is to think of the characteristics that encourage the architecture.


One characteristic is that this is a connected domain where processing one transaction always has the potential to change how following ones are processed. With transactions that are more independent of each other, there's less need to coordinate, so using separate processors running in parallel becomes more attractive.


LMAX concentrates on figuring out the consequences of how events change the world. Many sites are more about taking an existing store of information and rendering various combinations of that information to as many eyeballs as they can find - eg think of any media site. Here the architectural challenge often centers on getting your caches right.


Another characteristic of LMAX is that this is a backend system, so it's reasonable to consider how applicable it would be for something acting in an interactive mode. Increasingly web applications are helping us get used to server systems that react to requests, an aspect that does fit in well with this architecture. Where this architecture goes further than most such systems is its absolute use of asynchronous communications, resulting in the changes to the programming model that I outlined earlier.


These changes will take some getting used to for most teams. Most people tend to think of programming in synchronous terms and are not used to dealing with asynchrony. Yet it's long been true that asynchronous communication is an essential tool for responsiveness. It will be interesting to see if the wider use of asynchronous communication in the JavaScript world, with AJAX and node.js, will encourage more people to investigate this style. The LMAX team found that while it took a bit of time to adjust to the asynchronous style, it soon became natural and often easier. In particular error handling was much easier to deal with under this approach.


The LMAX team certainly feels that the days of the coordinating transactional database are numbered. The fact that you can write software more easily using this kind of architecture and that it runs more quickly removes much of the justification for the traditional central database.


For my part, I find this a very exciting story. Much of my goal is to concentrate on software that models complex domains. An architecture like this provides good separation of concerns, allowing people to focus on Domain-Driven Design and keeping much of the platform complexity well separated. The close coupling between domain objects and databases has always been an irritation - approaches like this suggest a way out.




1: The Free Lunch is Over.


This is the title of a famous essay by Herb Sutter. He describes the "free lunch" as the ever increasing clock speed of processors that regularly gave us more CPU performance every year. His point was that such clock cycle increases were no longer going to happen, instead performance increases would come in terms of multiple cores. But to take advantage of multiple cores, you need software that is capable of working concurrently - so without a shift in programming style people would no longer get the performance lunch for free.


2: I shall remain silent on what I think about the value of this innovation.


3: User Base.


All trading systems need low latency, since one trade can affect later trades and there's a lot of competition based on rapid reaction. Most trading platforms are for professionals - banks, brokers, etc - and typically have hundreds of users. A retail system has the potential for many more users, Betfair has millions of users and LMAX is designed for that scale. (The LMAX team isn't allowed to disclose its actual volumes.)


As it turns out, although a retail system has a lot of users, most of the activity comes from market makers. During volatile periods an instrument can get hundreds of updates per second, with unusual micro-bursts of hundreds of transactions within a single microsecond.


4: Hardware.


The 6 million TPS benchmark was measured on a 3 GHz dual-socket quad-core Nehalem-based Dell server with 32GB RAM.


5: The team does not use the name Business Logic Processor, in fact they have no name for that component, just referring to it as the business logic or core services. I've given it a name to make it easier to talk about in this article.


6: Currently LMAX runs two Business Logic Processors in its main data center and a third at a disaster recovery site. All three process input events.


7: What's in a transaction.


When people talk about transaction timing, one of the problems is what exactly is in a transaction. In some cases it's little more than inserting a new record in a database. LMAX's transactions are reasonably complex, more complex than a typical retail sale.


Placing an order in an exchange involves:


• checking the target market is open to take orders
• checking the order is valid for that market
• choosing the right matching policy for the type of order
• sequencing the order so that each order is matched at the best possible price and matched with the right liquidity
• creating and publicizing the trades made as a consequence of the match
• updating prices based on the new trades


8: At this scale of latency, you have to be aware of the garbage collector. For almost all systems these days, a modern GC compaction isn't going to have any noticeable effect on performance. However when you are trying to process millions of transactions per second with minimum jitter, a GC pause becomes a problem. The thing to remember is that short lived objects are ok, as they get collected quickly. So are objects that are permanent, since they will live for ever. The problematic objects are those that will get promoted to an older generation, but will eventually die. As this fragments the older generation region, it will trigger the compaction.


9: I rarely think about which collection implementation to use. This is perfectly reasonable when you're not in performance critical code. Different contexts suggest different behavior.


10: An interesting side-note. While the LMAX team shares much of the current interest in functional programming, they believe that the OO approach provides a better approach for this kind of problem. They've noticed that as they work to write faster code, they move away from a functional style towards OO style. Partly this is because of the copying of data that functional styles require to maintain immutability. But it's also because objects provide a better model of a complex domain with a richer choice of data structures.


11: The name "disruptor" was inspired from a couple of sources. One is the the fact that the LMAX team sees this component as something that disrupts current thinking on concurrency. The other is a response to the fact that Java is introducing a phaser, so it's natural to include disruptors too.


12: It would be possible to journal the output events too. This would have the advantage of not needing to recalculate them should they need to be replayed for downstream services. In practice, however, this isn't worthwhile. The business logic is deterministic and very fast, so there's no gain from storing the results.


13: Although it does need to use CAS instructions in this case. See the disruptor technical paper for more information.


14: This does mean that if they process a billion transactions per second the counter will wrap in 292 years, causing some hell to break loose. They have decided that fixing this is not a high priority.


15: SSDs are better at random access, but a disk-like IO system slows them down.


16: Another complication when writing fields is you have to ensure that any fields being written to are separated into different cache lines.


17: Ensuring a single writer to a memory location.


A complication in following the single-writer principle is that processors don't grab memory one location at a time. Rather they sweep up multiple contiguous locations, called a cache line, into cache in one go. Accessing memory in cache line chunks is obviously more efficient, but it also means that you have to ensure you don't have locations within that cache line that are written by different cores. So, for example, the Disruptor's sequence counters are padded to ensure they appear in separate cache lines.
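A hedged Java sketch of the padding trick (field layout is ultimately up to the JVM, so real implementations rely on mechanisms such as the JDK's @Contended annotation or verified layouts rather than on this naive version):

```java
// The long fields before and after the hot counter push neighbouring data onto
// other cache lines, so two cores updating two different counters never fight
// over the same line.
public class PaddedCounter {
    @SuppressWarnings("unused")
    private long p1, p2, p3, p4, p5, p6, p7;   // padding before
    private volatile long value;               // the hot, frequently written counter
    @SuppressWarnings("unused")
    private long q1, q2, q3, q4, q5, q6, q7;   // padding after

    public long get() { return value; }
    public void increment() { value = value + 1; } // single writer: no CAS needed
}
```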


Acknowledgments.


Financial institutions are usually secretive with their technical work, usually with little reason. This is a problem as it hampers the ability for the profession to learn from experience. So I'm especially thankful for LMAX's openness in discussing their experiences - both with this article and in their other material.


The main creators of the Disruptor are Martin Thompson, Mike Barker, and Dave Farley.


Martin Thompson and Dave Farley gave me a detailed walk-through of the LMAX architecture that served as the basis for this article. They also responded swiftly to email questions to improve my early drafts.


Concurrent programming is a tricky field that requires lots of attention to be competent at - and I have not put that effort in. As a result I'm entirely dependent upon others for understanding on concurrency and am thankful for their patient advice.


Further Reading.


If you'd prefer a video description of the LMAX architecture from LMAX team members, your best bet is the QCon presentation given in San Francisco in 2010 by Martin Thompson and Michael Barker.


The source code for the Disruptor is available as open source. There is also a good technical paper (pdf) that goes into more depth as well as a collection of blogs and articles on it.


Various members of the LMAX team have their own blogs: Martin Thompson, Michael Barker, and Trisha Gee.


How Trading Systems Function.


Algorithmic automated trading, or algorithmic trading, has been at the centre-stage of the trading world for more than a decade now. The percentage of volumes attributed to algorithmic trading has seen a significant rise in the last decade. As a result, it has become a highly competitive market that is heavily dependent on technology. Consequently, the basic architecture of automated trading systems that execute algorithmic strategies has undergone major changes over the past decade and continues to do so. For firms, especially those using high frequency trading systems, it has become a necessity to innovate on technology in order to compete in the world of algorithmic trading, making the field a hotbed for advances in computer and network technologies.


In this post, we will demystify the architecture behind automated trading systems for our readers. We compare the new architecture of automated trading systems with the traditional trading architecture and examine some of the major components behind these systems.


Traditional Architecture.


Any trading system, conceptually, is nothing more than a computational block that interacts with the exchange on two different streams.


• Receives market data.
• Sends order requests and receives replies from the exchange.


The market data that is received typically informs the system of the latest order book. It might contain some additional information like the volume traded so far, and the last traded price and quantity for a scrip. However, to make a decision on the data, the trader might need to look at old values or derive certain parameters from history. To cater to that, a conventional system would have a historical database to store the market data and tools to use that database. Analysis would also involve a study of the trader's past trades, hence another database for storing trading decisions. Last but not least, a GUI interface for the trader to view all this information on the screen.


The entire trading system can now be broken down into the following components (a hypothetical sketch follows the list):


• The exchange(s) – the external world
• The server
  • Market data receiver
  • Store market data
  • Store orders generated by the user
• Application
  • Take inputs from the user, including the trading decisions
  • Interface for viewing the information, including the data and orders
  • An order manager sending orders to the exchange
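
The same breakdown can be pictured as a handful of cooperating modules. The interfaces below are a hypothetical sketch (none of these names come from any specific product) meant only to show how the pieces fit together.

// Hypothetical component boundaries of a traditional trading system.
interface MarketDataReceiver { void onMarketData(byte[] rawPacket); }    // feed from the exchange
interface MarketDataStore   { void save(String symbol, double price, long qty); }
interface OrderStore        { void save(Order order); }                  // orders generated by the user
interface OrderManager      { void send(Order order); }                  // gateway to the exchange

class Order {
    String symbol;
    boolean buy;
    long quantity;
    double price;
}

// The application layer wires the pieces together and exposes a GUI through
// which the trader views data and enters trading decisions.
class TradingApplication {
    private final MarketDataStore marketDataStore;
    private final OrderStore orderStore;
    private final OrderManager orderManager;

    TradingApplication(MarketDataStore mds, OrderStore os, OrderManager om) {
        this.marketDataStore = mds;
        this.orderStore = os;
        this.orderManager = om;
    }

    void onTraderDecision(Order order) {
        orderStore.save(order);    // persist the decision for later analysis
        orderManager.send(order);  // forward it to the exchange
    }
}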


New Architecture.


The traditional architecture could not scale up to the needs and demands of automated trading with DMA. The latency from the origin of an event to order generation went beyond the dimension of human control and entered the realm of milliseconds and microseconds. So the tools that handle market data and its analysis needed to adapt accordingly. Order management also needs to be more robust and capable of handling many more orders per second. Since the time frame is so small compared to human reaction time, risk management also needs to handle orders in real time and in a completely automated way.


For example, even if the reaction time for an order is 1 millisecond (which is a lot compared to the latencies we see today), the system is still capable of making 1000 trading decisions in a single second. This means each of these 1000 trading decisions needs to go through the Risk management within the same second to reach the exchange. This is just a problem of complexity. Since the architecture now involves automated logic, 100 traders can now be replaced by a single automated trading system. This adds scale to the problem. So each of the logical units generates 1000 orders and 100 such units mean 100,000 orders every second. This means that the decision-making and order sending part needs to be much faster than the market data receiver in order to match the rate of data.


Hence, the level of infrastructure that this module demands has to be far superior to that of a traditional system (discussed in the previous section). As a result, the engine which runs the logic of decision making, also known as the ‘Complex Event Processing’ engine, or CEP, moved from within the application to the server. The application layer is now little more than a user interface for viewing and providing parameters to the CEP.


The problem of scaling also leads to an interesting situation. Let us say 100 different logics are being run over a single market data event (as discussed in the earlier example). However, there might be common pieces of complex calculations that need to be run for most of the 100 logic units, for example, the calculation of greeks for options. If each logic were to function independently, each unit would do the same greek calculation, which would unnecessarily use up processor resources. To avoid this redundancy, complex calculations are typically hived off into a separate calculation engine which provides the greeks as an input to the CEP.
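
As a rough illustration of that hiving-off, the sketch below computes the greeks once per market-data event and hands the same result to every strategy. The types and the placeholder delta formula are assumptions for illustration, not a real pricing library.

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical shared calculation engine: greeks are computed once per
// instrument per market-data event and reused by every strategy (CEP unit).
class GreeksEngine {
    private final Map<String, Double> deltaByInstrument = new ConcurrentHashMap<>();

    void onMarketData(String instrument, double underlyingPrice) {
        // Placeholder calculation; a real engine would run a pricing model here.
        double delta = Math.min(1.0, underlyingPrice / 100.0);
        deltaByInstrument.put(instrument, delta);
    }

    double delta(String instrument) {
        return deltaByInstrument.getOrDefault(instrument, 0.0);
    }
}

interface Strategy { void evaluate(String instrument, double delta); }

class Dispatcher {
    private final GreeksEngine engine = new GreeksEngine();
    private final List<Strategy> strategies;

    Dispatcher(List<Strategy> strategies) { this.strategies = strategies; }

    void onMarketData(String instrument, double underlyingPrice) {
        engine.onMarketData(instrument, underlyingPrice);            // computed once...
        double delta = engine.delta(instrument);
        for (Strategy s : strategies) s.evaluate(instrument, delta); // ...consumed by all strategies
    }
}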


Although the application layer is primarily a view, some of the risk checks (which are now resource-hungry operations owing to the problem of scale) can be offloaded to the application layer, especially those that are to do with the sanity of user inputs, like fat-finger errors. The rest of the risk checks are now performed by a separate Risk Management System (RMS) within the Order Manager (OM), just before releasing an order. The problem of scale also means that where earlier there were 100 different traders managing their own risk, there is now only one RMS to manage risk across all logical units/strategies. However, some risk checks may be particular to certain strategies, and some might need to be done across all strategies. Hence the RMS itself involves a strategy-level RMS (SLRMS) and a global RMS (GRMS). It might also involve a UI to view the SLRMS and GRMS.


Emergence of protocols for automated trading systems.


With innovations come necessities. Since the new architecture was capable of scaling to many strategies per server, the need to connect to multiple destinations from a single server emerged. So the order manager hosted several adaptors to send orders to multiple destinations and receive data from multiple exchanges. Each adaptor acts as an interpreter between the protocol that is understood by the exchange and the protocol of communication within the system. Multiple exchanges mean multiple adaptors.


However, to add a new exchange to the system, a new adapter has to be designed and plugged into the architecture, since each exchange follows its own protocol, optimized for the features that the exchange provides. To avoid this hassle of adapter addition, standard protocols have been designed. The most prominent amongst them is the FIX (Financial Information Exchange) protocol (see our post on introduction to FIX protocol). This not only makes it manageable to connect to different destinations on the fly, but also drastically reduces the go-to-market time when it comes to connecting with a new destination. For additional reading: Connecting FXCM over FIX, a detailed tutorial.
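
For a feel of what travels over such a connection, the snippet below assembles a simplified FIX-style new-order message. The tag numbers shown (35=D for a new order, 55 symbol, 54 side, 38 quantity, 40 order type, 44 price) are standard, but session-level fields such as BodyLength (9) and CheckSum (10) are normally filled in by a FIX engine and are deliberately left out of this sketch.

// Build a simplified FIX-style NewOrderSingle. Fields are separated by the SOH
// character (0x01); session fields like 9=BodyLength and 10=CheckSum would be
// added by the FIX engine and are omitted here.
class FixExample {
    private static final char SOH = '\u0001';

    static String newOrderSingle(String symbol, boolean buy, long qty, double price) {
        StringBuilder sb = new StringBuilder();
        sb.append("8=FIX.4.4").append(SOH);                    // protocol version
        sb.append("35=D").append(SOH);                         // MsgType: NewOrderSingle
        sb.append("55=").append(symbol).append(SOH);           // instrument
        sb.append("54=").append(buy ? "1" : "2").append(SOH);  // side: 1=buy, 2=sell
        sb.append("38=").append(qty).append(SOH);              // order quantity
        sb.append("40=2").append(SOH);                         // order type: 2=limit
        sb.append("44=").append(price).append(SOH);            // limit price
        return sb.toString();
    }

    public static void main(String[] args) {
        // Print with '|' in place of SOH so the field layout is visible.
        System.out.println(newOrderSingle("AAPL", true, 100, 101.25).replace(SOH, '|'));
    }
}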


The presence of standard protocols also makes it easy to integrate with third-party vendors, for analytics or market data feeds. As a result, the market becomes very efficient, as integrating with a new destination/vendor is no longer a constraint.


In addition, simulation becomes very easy as receiving data from the real market and sending orders to a simulator is just a matter of using the FIX protocol to connect to a simulator. The simulator itself can be built in-house or procured from a third party vendor. Similarly recorded data can just be replayed with the adaptors being agnostic to whether the data is being received from the live market or from a recorded data set.


Emergence of low latency architectures.


With the building blocks of an algorithmic trading system in place, strategies were optimized for the ability to process huge amounts of data in real time and make quick trading decisions. But with the advent of standard communication protocols like FIX, the technology entry barrier to set up an algorithmic trading desk became lower, and the field hence more competitive. As servers got more memory and higher clock frequencies, the focus shifted towards reducing the latency for decision making. Over time, reducing latency became a necessity for many reasons, such as:


• Strategy makes sense only in a low-latency environment.
• Survival of the fittest – competitors pick you off if you are not fast enough.


The problem, however, is that latency is really an overarching term that encompasses several different delays. Quantifying all of them in one generic term does not usually make much sense: although latency is easily understood as a concept, it is quite difficult to quantify. It therefore becomes increasingly important how the problem of reducing latency is approached.


If we look at the basic life cycle:


1. A market data packet is published by the exchange.
2. The packet travels over the wire.
3. The packet arrives at a router on the server side.
4. The router forwards the packet over the network on the server side.
5. The packet arrives on the Ethernet port of the server.
6. Depending on whether this is UDP/TCP, processing takes place and the packet, stripped of its headers and trailers, makes its way to the memory of the adaptor.
7. The adaptor then parses the packet and converts it into a format internal to the algorithmic trading platform.
8. This packet now travels through the several modules of the system – CEP, tick store, etc.
9. The CEP analyses and sends an order request.
10. The order request again goes through the reverse of the cycle as the market data packet.


High latency at any of these steps ensures a high latency for the entire cycle. Hence latency optimization usually starts with the first step in this cycle that is in our control, i.e., "the packet travels over the wire". The easiest thing to do here would be to shorten the distance to the destination as much as possible. Colocations are facilities provided by exchanges to host the trading server in close proximity to the exchange. The following diagram illustrates the gains that can be made by cutting the distance.


For any kind of high-frequency strategy involving a single destination, colocation has become a de facto must. However, strategies that involve multiple destinations need some careful planning. Several factors, like the time taken by the destination to reply to order requests compared with the ping time between the two destinations, must be considered before making such a decision. The decision may depend on the nature of the strategy as well.


Network latency is usually the first step in reducing overall latency of an algorithmic trading system. However there are plenty of other places where the architecture can be optimized.


Propagation latency.


Propagation latency signifies the time taken to send the bits along the wire, constrained by the speed of light, of course.


Several optimizations have been introduced to reduce propagation latency, apart from reducing the physical distance. For example, the estimated round-trip time for an ordinary cable between Chicago and New York is 13.1 milliseconds. Spread Networks, in October 2018, announced latency improvements which brought the estimated round-trip time to 12.98 milliseconds. Microwave communication was adopted further by firms such as Tradeworx, bringing the estimated round-trip time to 8.5 milliseconds. Note that the theoretical minimum is about 7.5 milliseconds. Continuing innovations are pushing the boundaries of science and fast approaching the theoretical limit of the speed of light. The latest developments in laser communication, earlier adopted in defense technologies, have further shaved nanoseconds off an already thinning latency over short distances.
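
The figures above can be sanity-checked with a back-of-the-envelope calculation. The sketch below assumes a straight-line Chicago to New York distance of roughly 1,150 km and a fibre route of roughly 1,330 km with a refractive index of about 1.5; these distances are approximations for illustration, not route data.

// Back-of-the-envelope propagation latency (round trip) for Chicago <-> New York.
class PropagationLatency {
    static final double C_KM_PER_MS = 299_792.458 / 1000.0;   // speed of light, km per millisecond

    static double roundTripMs(double pathKm, double refractiveIndex) {
        return 2 * pathKm * refractiveIndex / C_KM_PER_MS;
    }

    public static void main(String[] args) {
        // ~1,150 km straight line through air/vacuum (a microwave-like path)
        System.out.printf("straight line, n=1.0 : %.2f ms%n", roundTripMs(1150, 1.0));   // ~7.7 ms
        // ~1,330 km fibre route with refractive index ~1.5
        System.out.printf("fibre route,   n=1.5 : %.2f ms%n", roundTripMs(1330, 1.5));   // ~13.3 ms
    }
}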


Network processing latency.


Network processing latency signifies the latency introduced by routers, switches, etc.


The next level of optimization in the architecture of an algorithmic trading system would be the number of hops that a packet takes to travel from point A to point B. A hop is defined as one portion of the path between source and destination during which a packet doesn't pass through a physical device like a router or a switch. For example, a packet could travel the same distance via two different paths, but it may have two hops on the first path versus three hops on the second. Assuming the propagation delay is the same, the routers and switches each introduce their own latency, and as a rule of thumb, more hops means more added latency.


Network processing latency may also be affected by what we refer to as microbursts. A microburst is a sudden increase in the rate of data transfer which may not necessarily affect the average rate of data transfer. Since algorithmic trading systems are rule based, all such systems will react to the same event in the same way. As a result, a lot of participating systems may send orders, leading to a sudden flurry of data transfer between the participants and the destination, which constitutes a microburst. The following diagram represents what a microburst is.


The first figure shows a 1-second view of the data transfer rate. We can see that the average rate is well below the available bandwidth of 1 Gbps. However, if we dive deeper and look at the second image (the 5-millisecond view), we see that the transfer rate has spiked above the available bandwidth several times each second. As a result, the packet buffers on the network stack, both in the network endpoints and in routers and switches, may overflow. To avoid this, a bandwidth much higher than the observed average rate is usually allocated for an algorithmic trading system.


Serialization latency.


Serialization latency signifies the time taken to pull the bits on and off the wire.


A packet size of 1500 bytes transmitted on a T1 line (1,544,000 bps) would produce a serialization delay of about 8 milliseconds. However, the same 1500-byte packet using a 56K modem (57,344 bps) would take 200 milliseconds. A 1G Ethernet line would reduce this latency to about 11 microseconds.
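
Those figures come straight from dividing the packet size in bits by the line rate. A minimal calculation follows; because it counts the full 1,500 bytes at the nominal rates quoted above, the computed values differ slightly from the rounded figures in the text.

// Serialization delay = packet size in bits / line rate in bits per second.
class SerializationLatency {
    static double delaySeconds(int packetBytes, double bitsPerSecond) {
        return packetBytes * 8.0 / bitsPerSecond;
    }

    public static void main(String[] args) {
        int packet = 1500;   // bytes
        System.out.printf("T1 (1.544 Mbps) : %.1f ms%n", delaySeconds(packet, 1_544_000) * 1e3);      // ~7.8 ms
        System.out.printf("56K modem       : %.0f ms%n", delaySeconds(packet, 57_344) * 1e3);         // ~209 ms
        System.out.printf("1G Ethernet     : %.0f us%n", delaySeconds(packet, 1_000_000_000) * 1e6);  // ~12 us
    }
}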


Interrupt latency.


Interrupt latency signifies the latency introduced by interrupts while receiving packets on a server.


Interrupt latency is defined as the time elapsed between when an interrupt is generated to when the source of the interrupt is serviced. When is an interrupt generated? Interrupts are signals to the processor emitted by hardware or software indicating that an event needs immediate attention. The processor in turn responds by suspending its current activity, saving its state and handling the interrupt. Whenever a packet is received on the NIC, an interrupt is sent to handle the bits that have been loaded into the receive buffer of the NIC. The time taken to respond to this interrupt not only affects the processing of the newly arriving payload, but also the latency of the existing processes on the processor.


Solarflare introduced OpenOnload in 2018, which implements a technique known as kernel bypass, where the processing of the packet is not left to the operating system kernel but handled in user space. The entire packet is mapped directly into user space by the NIC and is processed there. As a result, interrupts are completely avoided.


As a result the rate of processing each packet is accelerated. The following diagram clearly demonstrates the advantages of kernel bypass.


Application latency.


Application latency signifies the time taken by the application to process the data.


This depends on several factors: the number of packets to be processed, the processing allocated to the application logic, the complexity of the calculations involved, programming efficiency, etc. Increasing the number of processors on the system would in general reduce application latency, as would an increased clock frequency. A lot of algorithmic trading systems take advantage of dedicating processor cores to essential elements of the application, such as the strategy logic. This avoids the latency introduced by the process being switched between cores.


Similarly, if the programming of the strategy has been done keeping in mind the cache sizes and locality of memory access, then there would be a lot of memory cache hits, resulting in further reduction of latency. To facilitate this, a lot of systems use very low-level programming languages to optimize the code for the specific architecture of the processors. Some firms have even gone to the extent of burning complex calculations onto hardware using Field Programmable Gate Arrays (FPGAs). With increasing complexity comes increasing cost, and the following diagram aptly illustrates this.


Levels of sophistication.


The world of high frequency algorithmic trading has entered an era of intense competition. With each participant adopting new methods of ousting the competition, technology has progressed by leaps and bounds. Modern day algorithmic trading architectures are quite complex compared to their early stage counterparts. Accordingly, advanced systems are more expensive to build both in terms of time and money.


Conclusion.


This was a detailed post on algorithmic trading system architecture which, we are sure, gave you insight into the components involved and the various challenges that architecture developers need to handle in order to build robust automated trading systems.


If you want to learn various aspects of algorithmic trading, check out the Executive Programme in Algorithmic Trading (EPAT™). The course covers training modules like Statistics & Econometrics, Financial Computing & Technology, and Algorithmic & Quantitative Trading. EPAT™ equips you with the required skill sets to build a promising career in algorithmic trading. Enroll now!



Evolution and Practice: Low-latency Distributed Applications in Finance.


The finance industry has unique demands for low-latency distributed systems.


Andrew Brook.


Virtually all systems have some requirements for latency, defined here as the time required for a system to respond to input. (Non-halting computations exist, but they have few practical applications.) Latency requirements appear in problem domains as diverse as aircraft flight controls (copter. ardupilot/), voice communications (queue. acm/detail. cfm? id=1028895), multiplayer gaming (queue. acm/detail. cfm? id=971591), online advertising (acuityads/real-time-bidding/), and scientific experiments (home. web. cern. ch/about/accelerators/cern-neutrinos-gran-sasso).


Distributed systems—in which computation occurs on multiple networked computers that communicate and coordinate their actions by passing messages—present special latency considerations. In recent years the automation of financial trading has driven requirements for distributed systems with challenging latency requirements (often measured in microseconds or even nanoseconds; see table 1) and global geographic distribution. Automated trading provides a window into the engineering challenges of ever-shrinking latency requirements, which may be useful to software engineers in other fields.


This article focuses on applications where latency (as opposed to throughput, efficiency, or some other metric) is one of the primary design considerations. Phrased differently, "low-latency systems" are those for which latency is the main measure of success and is usually the toughest constraint to design around. The article presents examples of low-latency systems that illustrate the external factors that drive latency and then discusses some practical engineering approaches to building systems that operate at low latency.


Why is everyone in such a hurry?


To understand the impact of latency on an application, it's important first to understand the external, real-world factors that drive the requirement. The following examples from the finance industry illustrate the impact of some real-world factors.


Request for Quote Trading.


In 2003 I worked at a large bank that had just deployed a new Web-based institutional foreign-currency trading system. The quote and trade engine, a J2EE (Java 2 Platform, Enterprise Edition) application running in a WebLogic server on top of an Oracle database, had response times that were reliably under two seconds—fast enough to ensure good user experience.


Around the same time that the bank's Web site went live, a multibank online trading platform was launched. On this new platform, a client would submit an RFQ (request for quote) that would be forwarded to multiple participating banks. Each bank would respond with a quote, and the client would choose which one to accept.


My bank initiated a project to connect to the new multibank platform. The reasoning was that since a two-second response time was good enough for a user on the Web site, it should be good enough for the new platform, and so the same quote and trade engine could be reused. Within weeks of going live, however, the bank was winning a surprisingly small percentage of RFQs. The root cause was latency. When two banks responded with the same price (which happened quite often), the first response was displayed at the top of the list. Most clients waited to see a few different quotes and then clicked on the one at the top of the list. The result was that the fastest bank often won the client's business—and my bank wasn't the fastest.


The slowest part of the quote-generation process occurred in the database queries loading customer pricing parameters. Adding a cache to the quote engine and optimizing a few other "hot spots" in the code brought quote latency down to the range of roughly 100 milliseconds. With a faster engine, the bank was able to capture significant market share on the competitive quotation platform—but the market continued to evolve.


Streaming Quotes.


By 2006 a new style of currency trading was becoming popular. Instead of a customer sending a specific request and the bank responding with a quote, customers wanted the banks to send a continuous stream of quotes. This streaming-quotes style of trading was especially popular with certain hedge funds that were developing automated trading strategies—applications that would receive streams of quotes from multiple banks and automatically decide when to trade. In many cases, humans were now out of the loop on both sides of the trade.


To understand this new competitive dynamic, it's important to know how banks compute the rates they charge their clients for foreign-exchange transactions. The largest banks trade currencies with each other in the so-called interbank market. The exchange rates set in that market are the most competitive and form the basis for the rates (plus some markup) that are offered to clients. Every time the interbank rate changes, each bank recomputes and republishes the corresponding client rate quotes. If a client accepts a quote (i. e., requests to trade against a quoted exchange rate), the bank can immediately execute an offsetting trade with the interbank market, minimizing risk and locking in a small profit. There are, however, risks to banks that are slow to update their quotes. A simple example can illustrate:


Imagine that the interbank spot market for EUR/USD has rates of 1.3558 / 1.3560. (The term spot means that the agreed-upon currencies are to be exchanged within two business days. Currencies can be traded for delivery at any mutually agreed-upon date in the future, but the spot market is the most active in terms of number of trades.) Two rates are quoted: one for buying (the bid rate), and one for selling (the offered or ask rate). In this case, a participant in the interbank market could sell one euro and receive 1.3558 US dollars in return. Conversely, one could buy one euro for a price of 1.3560 US dollars.


Say that two banks, A and B, are participants in the interbank market and are publishing quotes to the same hedge fund client, C. Both banks add a margin of 0.0001 to the exchange rates they quote to their clients—so both publish quotes of 1.3557 / 1.3561 to client C. Bank A, however, is faster at updating its quotes than bank B, taking about 50 milliseconds while bank B takes about 250 milliseconds. There are approximately 50 milliseconds of network latency between banks A and B and their mutual client C. Both banks A and B take about 10 milliseconds to acknowledge an order, while the hedge fund C takes about 10 milliseconds to evaluate new quotes and submit orders. Table 2 breaks down the sequence of events.


The net effect of this new streaming-quote style of trading was that any bank that was significantly slower than its rivals was likely to suffer losses when market prices changed and its quotes weren't updated quickly enough. At the same time, those banks that could update their quotes fastest made significant profits. Latency was no longer just a factor in operational efficiency or market share—it directly impacted the profit and loss of the trading desk. As the volume and speed of trading increased throughout the mid-2000s, these profits and losses grew to be quite large. (How low can you go? Table 3 shows some examples of approximate latencies of systems and applications across nine orders of magnitude.)


To improve its latency, my bank split its quote and trading engine into distinct applications and rewrote the quote engine in C++. The small delays added by each hop in the network from the interbank market to the bank and onward to its clients were now significant, so the bank upgraded firewalls and procured dedicated telecom circuits. Network upgrades combined with the faster quote engine brought end-to-end quote latency down below 10 milliseconds for clients who were physically located close to our facilities in New York, London, or Hong Kong. Trading performance and profits rose accordingly—but, of course, the market kept evolving.


Engineering systems for low latency.


The latency requirements of a given application can be addressed in many ways, and each problem requires a different solution. There are some common themes, though. First, it is usually necessary to measure latency before it can be improved. Second, optimization often requires looking below abstraction layers and adapting to the reality of the physical infrastructure. Finally, it is sometimes possible to restructure the algorithms (or even the problem definition itself) to achieve low latency.


Lies, damn lies, and statistics.


The first step to solving most optimization problems (not just those that involve software) is to measure the current system's performance. Start from the highest level and measure the end-to-end latency. Then measure the latency of each component or processing stage. If any stage is taking an unusually large portion of the latency, then break it down further and measure the latency of its substages. The goal is to find the parts of the system that contribute the most to the total latency and focus optimization efforts there. This is not always straightforward in practice, however.


For example, imagine an application that responds to customer quote requests received over a network. The client sends 100 quote requests in quick succession (the next request is sent as soon as the prior response is received) and reports total elapsed time of 360 milliseconds—or 3.6 milliseconds on average to service a request. The internals of the application are broken down and measured using the same 100-quote test set:


• Read input message from network and parse - 5 microseconds.
• Look up client profile - 3.2 milliseconds (3,200 microseconds).
• Compute client quote - 15 microseconds.
• Log quote - 20 microseconds.
• Serialize quote to a response message - 5 microseconds.
• Write to network - 5 microseconds.


As clearly shown in this example, significantly reducing latency means addressing the time it takes to look up the client's profile. A quick inspection shows that the client profile is loaded from a database and cached locally. Further testing shows that when the profile is in the local cache (a simple hash table), response time is usually under a microsecond, but when the cache is missed it takes several hundred milliseconds to load the profile. The average of 3.2 milliseconds was almost entirely the result of one very slow response (of about 320 milliseconds) caused by a cache miss. Likewise, the client's reported 3.6-millisecond average response time turns out to be a single very slow response (350 milliseconds) and 99 fast responses that took around 100 microseconds each.


Means and outliers.


Most systems exhibit some variance in latency from one event to the next. In some cases the variance (and especially the highest-latency outliers) drives the design, much more so than the average case. It is important to understand which statistical measure of latency is appropriate to the specific problem. For example, if you are building a trading system that earns small profits when the latency is below some threshold but incurs massive losses when latency exceeds that threshold, then you should be measuring the peak latency (or, alternatively, the percentage of requests that exceed the threshold) rather than the mean. On the other hand, if the value of the system is more or less inversely proportional to the latency, then measuring (and optimizing) the average latency makes more sense even if it means there are some large outliers.
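
A simple way to see both views of the same data is to report the mean alongside a high percentile and the peak. The helper below is a generic sketch, not tied to any particular monitoring library; it reproduces the example above, where a single 350-millisecond outlier drags the mean to about 3.6 milliseconds while the 99th percentile still looks fast and only the peak reveals the spike.

import java.util.Arrays;

// Report mean, 99th-percentile, and peak latency from a set of samples.
// Which statistic matters depends on which failures actually cost you money.
class LatencyStats {
    static double mean(long[] samplesNanos) {
        return Arrays.stream(samplesNanos).average().orElse(0.0);
    }

    static long percentile(long[] samplesNanos, double p) {
        long[] sorted = samplesNanos.clone();
        Arrays.sort(sorted);
        int index = (int) Math.ceil(p / 100.0 * sorted.length) - 1;
        return sorted[Math.max(0, index)];
    }

    public static void main(String[] args) {
        long[] samples = new long[100];
        Arrays.fill(samples, 100_000L);   // 99 fast responses of ~100 microseconds
        samples[50] = 350_000_000L;       // one 350-millisecond outlier
        long max = Arrays.stream(samples).max().getAsLong();
        System.out.printf("mean = %.1f us, p99 = %d us, peak = %d us%n",
                mean(samples) / 1_000.0, percentile(samples, 99.0) / 1_000, max / 1_000);
    }
}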


What are you measuring?


Astute readers may have noticed that the latency measured inside the quote server application doesn't quite add up to the latency reported by the client application. That is most likely because they aren't actually measuring the same thing. Consider the following simplified pseudocode:


(In the client application)

for (int i = 0; i < 100; i++)
{
    RequestMessage requestMessage = new RequestMessage(quoteRequest);
    long sentTime = getSystemTime();
    sendMessage(requestMessage);                         // transmit the request to the quote server
    ResponseMessage responseMessage = receiveMessage();  // block until the response comes back
    long quoteLatency = getSystemTime() - sentTime;      // round-trip latency as seen by the client
    logStats(quoteLatency);
}

(In the quote server application)

RequestMessage requestMessage = receive();               // block until a request arrives
long receivedTime = getSystemTime();

QuoteRequest quoteRequest = parseRequest(requestMessage);
long parseTime = getSystemTime();
long parseLatency = parseTime - receivedTime;            // time to parse the request

ClientProfile profile = lookupClientProfile(quoteRequest.client);
long profileTime = getSystemTime();
long profileLatency = profileTime - parseTime;           // time to look up the client profile

Quote quote = computeQuote(profile);
long computeTime = getSystemTime();
long computeLatency = computeTime - profileTime;         // time to compute the quote

logQuote(quote);
long logTime = getSystemTime();
long logLatency = logTime - computeTime;                 // time to log the quote

QuoteMessage quoteMessage = new QuoteMessage(quote);
serialize(quoteMessage);
long serializeTime = getSystemTime();
long serializationLatency = serializeTime - logTime;     // time to build and serialize the response

sendMessage(quoteMessage);
long sentTime = getSystemTime();
long sendLatency = sentTime - serializeTime;             // time for the send call to return

logStats(parseLatency, profileLatency, computeLatency,
         logLatency, serializationLatency, sendLatency);


Note that the elapsed time measured by the client application includes the time to transmit the request over the network, as well as the time for the response to be transmitted back. The quote server, on the other hand, measures only the time elapsed from the arrival of the request to when the response is sent (or, more precisely, when the send method returns). The 350-microsecond discrepancy between the average response time measured by the client and the equivalent measurement by the quote server could be caused by the network, but it might also be the result of delays within the client or server. Moreover, depending on the programming language and operating system, checking the system clock and logging the latency statistics may introduce material delays.


This approach is simplistic, but when combined with code-profiling tools to find the most commonly executed code and resource contention, it is usually good enough to identify the first (and often easiest) targets for latency optimization. It's important to keep this limitation in mind, though.


Measuring distributed systems latency via network traffic capture.


Distributed systems pose some additional challenges to latency measurement—as well as some opportunities. In cases where the system is distributed across multiple servers it can be hard to correlate timestamps of related events. The network itself can be a significant contributor to the latency of the system. Messaging middleware and the networking stacks of operating systems can be complex sources of latency.


At the same time, the decomposition of the overall system into separate processes running on independent servers can make it easier to measure certain interactions accurately between components of the system over the network. Many network devices (such as switches and routers) provide mechanisms for making timestamped copies of the data that traverse the device with minimal impact on the performance of the device. Most operating systems provide similar capabilities in software, albeit with a somewhat higher risk of delaying the actual traffic. Timestamped network-traffic captures (often called packet captures ) can be a useful tool to measure more precisely when a message was exchanged between two parts of the system. These measurements can be obtained without modifying the application itself and generally with very little impact on the performance of the system as a whole. (See wireshark and tcpdump.)


One of the challenges of measuring performance at short time scales across distributed systems is clock synchronization. In general, to measure the time elapsed from when an application on server A transmits a message to when the message reaches a second application on server B, it is necessary to check the time on A's clock when the message is sent and on B's clock when the message arrives, and then subtract those two timestamps to determine the latency. If the clocks on A and B are not in sync, then the computed latency will actually be the real latency plus the clock skew between A and B.


When is this a problem in the real world? Real-world drift rates for the quartz oscillators that are used in most commodity server motherboards are on the order of 10^-5, which means that the oscillator may be expected to drift by 10 microseconds each second. If uncorrected, it may gain or lose as much as a second over the course of a day. For systems operating at time scales of milliseconds or less, clock skew may render the measured latency meaningless. Oscillators with significantly lower drift rates are available, but without some form of synchronization, they will eventually drift apart. Some mechanism is needed to bring each server's local clock into alignment with some common reference time.
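
To put the drift figure in context, here is the arithmetic as a tiny sketch; the 10^-5 drift rate is the one quoted above, and the final comment restates the relationship between measured and true latency across two unsynchronized clocks.

// How far a clock with a 1e-5 drift rate wanders if it is never corrected.
class ClockDrift {
    public static void main(String[] args) {
        double driftRate = 1e-5;                      // seconds of error per second of elapsed time
        double perSecondMicros = driftRate * 1e6;     // 10 microseconds gained or lost per second
        double perDaySeconds   = driftRate * 86_400;  // ~0.86 seconds per day
        System.out.printf("drift: %.0f us/s, %.2f s/day%n", perSecondMicros, perDaySeconds);
        // Any latency measured across two such clocks includes their relative offset:
        // measuredLatency = trueLatency + (offsetB - offsetA)
    }
}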


Developers of distributed systems should understand NTP (Network Time Protocol) at a minimum and are encouraged to learn about PTP (Precision Time Protocol) and usage of external signals such as GPS to obtain high-accuracy time synchronization in practice. Those who need time accuracy at the sub-microsecond scale will want to become familiar with hardware implementations of PTP (especially at the network interface) as well as tools for extracting time information from each core's local clock. (See tools. ietf/html/rfc1305, tools. ietf/html/rfc5905, nist. gov/el/isd/ieee/ieee1588.cfm, and queue. acm/detail. cfm? id=2354406.)


Abstraction versus Reality.


Modern software engineering is built upon abstractions that allow programmers to manage the complexity of ever-larger systems. Abstractions do this by simplifying or generalizing some aspect of the underlying system. This doesn't come for free, though—simplification is an inherently lossy process and some of the lost details may be important. Moreover, abstractions are often defined in terms of function rather than performance.


Somewhere deep below an application are electrical currents flowing through semiconductors and pulses of light traveling down fibers. Programmers rarely need to think of their systems in these terms, but if their conceptualized view drifts too far from reality they are likely to experience unpleasant surprises.


Four examples illustrate this point:


• TCP provides a useful abstraction over UDP (User Datagram Protocol) in terms of delivery of a sequence of bytes. TCP ensures that bytes will be delivered in the order they were sent even if some of the underlying UDP datagrams are lost. The transmission latency of each byte (the time from when it is written to a TCP socket in the sending application until it is read from the corresponding receiving application's socket) is not guaranteed, however. In certain cases (specifically when an intervening datagram is lost) the data contained in a given UDP datagram may be delayed significantly from delivery to the application, while the missed data ahead of it is recovered.


• Cloud hosting provides virtual servers that can be created on demand without precise control over the location of the hardware. An application or administrator can create a new virtual server "on the cloud" in less than a minute—an impossible feat when assembling and installing physical hardware in a data center. Unlike the physical server, however, the location of the cloud server or its location in the network topology may not be precisely known. If a distributed application depends on the rapid exchange of messages between servers, the physical proximity of those servers may have a significant impact on the overall application performance.


• Threads allow developers to decompose a problem into separate sequences of instructions that can be allowed to run concurrently, subject to certain ordering constraints, and that can operate on shared resources (such as memory). This allows developers to take advantage of multicore processors without needing to deal directly with issues of scheduling and core assignment. In some cases, however, the overhead of context switches and passing data between cores can outweigh the advantages gained by concurrency.


• Hierarchical storage and cache-coherency protocols allow programmers to write applications that use large amounts of virtual memory (on the order of terabytes in modern commodity servers), while experiencing latencies measured in nanoseconds when requests can be serviced by the closest caches. The abstraction hides the fact that the fastest memory is very limited in capacity (e.g., register files on the order of a few kilobytes), while memory that has been swapped out to disk may incur latencies in the tens of milliseconds.


Each of these abstractions is extremely useful but can have unanticipated consequences for low-latency applications. There are some practical steps to take to identify and mitigate latency issues resulting from these abstractions.


Messaging and Network Protocols.


The near ubiquity of IP-based networks means that regardless of which messaging product is in use, under the covers the data is being transmitted over the network as a series of discrete packets. The performance characteristics of the network and the needs of an application can vary dramatically—so one size almost certainly does not fit all when it comes to messaging middleware for latency-sensitive distributed systems.


There's no substitute for getting under the hood here. For example, if an application runs on a private network (you control the hardware), communications follow a publisher/subscriber model, and the application can tolerate a certain rate of data loss, then raw multicast may offer significant performance gains over any middleware based on TCP. If an application is distributed across very long distances and data order is not important, then a UDP-based protocol may offer advantages in terms of not stalling to resend a missed packet. If TCP-based messaging is being used, then it's worth keeping in mind that many of its parameters (especially buffer sizes, slow start, and Nagle's algorithm) are configurable and the "out-of-the-box" settings are usually optimized for throughput rather than latency (queue. acm/detail. cfm? id=2539132).
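
As a concrete example of "out-of-the-box settings optimized for throughput", the standard Java socket API exposes a few of these knobs directly. Whether changing them helps depends entirely on the workload, so treat the specific values below as illustrative only.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Illustrative latency-oriented TCP socket settings using the standard Java API.
class LowLatencySocket {
    static Socket connect(String host, int port) throws IOException {
        Socket socket = new Socket();
        socket.setTcpNoDelay(true);           // disable Nagle's algorithm: send small writes immediately
        socket.setSendBufferSize(64 * 1024);  // example buffer sizes; tune against real measurements
        socket.setReceiveBufferSize(64 * 1024);
        socket.connect(new InetSocketAddress(host, port), 1000 /* connect timeout, ms */);
        return socket;
    }
}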


The physical constraint that information cannot propagate faster than the speed of light is a very real consideration when dealing with short time scales and/or long distances. The two largest stock exchanges, NASDAQ and NYSE, run their matching engines in data centers in Carteret and Mahwah, New Jersey, respectively. A ray of light takes 185 microseconds to travel the 55.4-km distance between these two locations. Light in a glass fiber with a refractive index of 1.6 and following a slightly longer path (roughly 65 km) takes almost 350 microseconds to make the same one-way trip. Given that the computations involved in trading decisions can now be made on time scales of 10 microseconds or less, signal propagation latency cannot be ignored.


Decomposing a problem into a number of threads that can be executed concurrently can greatly increase performance, especially in multicore systems, but in some cases it may actually be slower than a single-threaded solution.


Specifically, multithreaded code incurs overhead in the following three ways:


• When multiple threads operate on the same data, controls are required to ensure that the data remains consistent. This may include acquisition of locks or implementations of read or write barriers. In multicore systems, these concurrency controls require that thread execution is suspended while messages are passed between cores. If a lock is already held by one thread, then other threads seeking that lock will need to wait until the first one is finished. If several threads are frequently accessing the same data, there may be significant contention for locks.


• Similarly, when multiple threads operate on the same data, the data itself must be passed between cores. If several threads access the same data but each performs only a few computations on it, the time required to move the data between cores may exceed the time spent operating on it.


• Finally, if there are more threads than cores, the operating system must periodically perform a context switch in which the thread running on a given core is halted, its state is saved, and another thread is allowed to run. The cost of a context switch can be significant. If the number of threads far exceeds the number of cores, context switching can be a significant source of delay.


In general, application design should use threads in a way that represents the inherent concurrency of the underlying problem. If the problem contains significant computation that can be performed in isolation, then a larger number of threads is called for. On the other hand, if there is a high degree of interdependency between computations or (worst case) if the problem is inherently serial, then a single-threaded solution may make more sense. In both cases, profiling tools should be used to identify excessive lock contention or context switching. Lock-free data structures (now available for several programming languages) are another alternative to consider (queue. acm/detail. cfm? id=2492433).


It's also worth noting that the physical arrangement of cores, memory, and I/O may not be uniform. For example, on modern Intel microprocessors certain cores can interact with external I/O (e. g., network interfaces) with much lower latency than others, and exchanging data between certain cores is faster than others. As a result, it may be advantageous explicitly to pin specific threads to specific cores (queue. acm/detail. cfm? id=2513149).


Hierarchical storage and cache misses.


All modern computing systems use hierarchical data storage—a small amount of fast memory combined with multiple levels of larger (but slower) memory. Recently accessed data is cached so that subsequent access is faster. Since most applications exhibit a tendency to access the same memory multiple times in a short period, this can greatly increase performance. To obtain maximum benefit, however, the following three factors should be incorporated into application design:


• Using less memory overall (or at least in the parts of the application that are latency-sensitive) increases the probability that needed data will be available in one of the caches. In particular, for especially latency-sensitive applications, designing the app so that frequently accessed data fits within the CPU's caches can significantly improve performance. Specifications vary but Intel's Haswell microprocessors, for example, provide 32 KB per core for L1 data cache and up to 40 MB of shared L3 cache for the entire CPU.


• Repeated allocation and release of memory should be avoided if reuse is possible. An object or data structure that is allocated once and reused has a much greater chance of being present in a cache than one that is repeatedly allocated anew. This is especially true when developing in environments where memory is managed automatically, as overhead caused by garbage collection of memory that is released can be significant.


• The layout of data structures in memory can have a significant impact on performance because of the architecture of caches in modern processors. While the details vary by platform and are outside the scope of this article, it is generally a good idea to prefer arrays as data structures over linked lists and trees and to prefer algorithms that access memory sequentially, since these allow the hardware prefetcher (which attempts to load data preemptively from main memory into cache before it is requested by the application) to operate most efficiently. Note also that data that will be operated on concurrently by different cores should be structured so that it is unlikely to fall in the same cache line (the latest Intel CPUs use 64-byte cache lines) to avoid cache-coherency contention. A brief sketch follows this list.
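
The layout point can be sketched in a few lines: a primitive array keeps values contiguous so the prefetcher can stream them, while a linked list of boxed values scatters nodes across the heap. The example is deliberately simplified and says nothing about any particular JVM's allocation behaviour.

import java.util.LinkedList;
import java.util.List;

// The same computation over two memory layouts. The primitive array is contiguous
// and prefetcher-friendly; the linked list of boxed Longs chases pointers all over
// the heap and typically suffers far more cache misses.
class LayoutComparison {
    static long sumArray(long[] values) {
        long sum = 0;
        for (long v : values) sum += v;   // sequential access, hardware prefetch works well
        return sum;
    }

    static long sumList(List<Long> values) {
        long sum = 0;
        for (Long v : values) sum += v;   // each node and each boxed value is a separate object
        return sum;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        long[] array = new long[n];
        List<Long> list = new LinkedList<>();
        for (int i = 0; i < n; i++) { array[i] = i; list.add((long) i); }
        System.out.println(sumArray(array) == sumList(list));   // same result, very different latency profile
    }
}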


A note on premature optimization.


The optimizations just presented should be considered part of a broader design process that takes into account other important objectives including functional correctness, maintainability, etc. Keep in mind Knuth's quote about premature optimization being the root of all evil; even in the most performance-sensitive environments, it is rare that a programmer should be concerned with determining the correct number of threads or the optimal data structure until empirical measurements indicate that a specific part of the application is a hot spot. The focus instead should be on ensuring that performance requirements are understood early in the design process and that the system architecture is sufficiently decomposable to allow detailed measurement of latency when and as optimization becomes necessary. Moreover (and as discussed in the next section), the most useful optimizations may not be in the application code at all.


Changes in Design.


The optimizations presented so far have been limited to improving the performance of a system for a given set of functional requirements. There may also be opportunities to change the broader design of the system or even to change the functional requirements of the system in a way that still meets the overall objectives but significantly improves performance. Latency optimization is no exception. In particular, there are often opportunities to trade reduced efficiency for improved latency.


Three real-world examples of design tradeoffs between efficiency and latency are presented here, followed by an example where the requirements themselves present the best opportunity for redesign.


In certain cases trading efficiency for latency may be possible, especially in systems that operate well below their peak capacity. In particular, it may be advantageous to compute possible outputs in advance, especially when the system is idle most of the time but must react quickly when an input arrives.


A real-world example can be found in the systems used by some firms to trade stocks based on news such as earnings announcements. Imagine that the market expects Apple to earn between $9.45 and $12.51 per share. The goal of the trading system, upon receiving Apple's actual earnings, would be to sell some number of shares of Apple stock if the earnings were below $9.45, buy some number of shares if the earnings were above $12.51, and do nothing if the earnings fell within the expected range. The act of buying or selling stocks begins with submitting an order to the exchange. The order consists of (among other things) an indicator of whether the client wishes to buy or sell, the identifier of the stock to buy or sell, the number of shares desired, and the price at which the client wishes to buy or sell. Throughout the afternoon leading up to Apple's announcement, the client would receive a steady stream of market-data messages that indicate the current price at which Apple's stock is trading.


A conventional implementation of this trading system would cache the market-price data and, upon receipt of the earnings data, decide whether to buy or sell (or neither), construct an order, and serialize that order to an array of bytes to be placed into the payload of a message and sent to the exchange.


An alternative implementation performs most of the same steps but does so on every market-data update rather than only upon receipt of the earnings data. Specifically, when each market-data update message is received, the application constructs two new orders (one to buy, one to sell) at the current prices and serializes each order into a message. The messages are cached but not sent. When the next market-data update arrives, the old order messages are discarded and new ones are created. When the earnings data arrives, the application simply decides which (if either) of the order messages to send.


The first implementation is clearly more efficient (it has a lot less wasted computation), but at the moment when latency matters most (i. e., when the earnings data has been received), the second algorithm is able to send out the appropriate order message sooner. Note that this example presents application-level precomputation; there is an analogous process of branch prediction that takes place in pipelined processors which can also be optimized (via guided profiling) but is outside the scope of this article.
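
A stripped-down version of the second implementation might look like the following. The message format and the serialize step are hypothetical placeholders for whatever the exchange gateway actually expects; the point is only that the expensive work happens before the earnings number arrives.

// Precompute and pre-serialize both possible orders on every market-data update,
// so that when the earnings number arrives the only remaining work is choosing
// which cached byte array (if either) to hand to the gateway.
class EarningsTrader {
    private final double lowExpected  = 9.45;    // expected earnings range, dollars per share
    private final double highExpected = 12.51;
    private byte[] pendingBuyOrder;
    private byte[] pendingSellOrder;

    void onMarketData(double currentPrice) {
        // Hypothetical serialization into the gateway's wire format.
        pendingBuyOrder  = serializeOrder("AAPL", true,  100, currentPrice);
        pendingSellOrder = serializeOrder("AAPL", false, 100, currentPrice);
    }

    void onEarnings(double earningsPerShare, Gateway gateway) {
        if (earningsPerShare > highExpected)     gateway.send(pendingBuyOrder);
        else if (earningsPerShare < lowExpected) gateway.send(pendingSellOrder);
        // within the expected range: do nothing
    }

    private byte[] serializeOrder(String symbol, boolean buy, int qty, double price) {
        return (symbol + "," + (buy ? "B" : "S") + "," + qty + "," + price).getBytes();
    }

    interface Gateway { void send(byte[] orderMessage); }
}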


Keeping the system warm.


In some low-latency systems long delays may occur between inputs. During these idle periods, the system may grow "cold." Critical instructions and data may be evicted from caches (costing hundreds of nanoseconds to reload), threads that would process the latency-sensitive input are context-switched out (costing tens of microseconds to resume), CPUs may switch into power-saving states (costing a few milliseconds to exit), etc. Each of these steps makes sense from an efficiency standpoint (why run a CPU at full power when nothing is happening?), but all of them impose latency penalties when the input data arrives.


In cases where the system may go for hours or days between input events there is a potential operational issue as well: configuration or environmental changes may have "broken" the system in some important way that won't be discovered until the event occurs—when it's too late to fix.


A common solution to both problems is to generate a continuous stream of dummy input data to keep the system "warm." The dummy data needs to be as realistic as possible to ensure that it keeps the right data in the caches and that breaking changes to the environment are detected. The dummy data needs to be reliably distinguishable from legitimate data, though, to prevent downstream systems or clients from being confused.
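
One sketch of the keep-warm idea, with all names purely illustrative: a scheduled task feeds synthetic inputs through the same code path, carrying an explicit flag so that nothing downstream mistakes them for client traffic.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Periodically push clearly marked dummy requests through the real processing
// path so caches stay hot, threads stay scheduled, and configuration breakage
// is discovered before a real event arrives.
class Warmer {
    interface Pipeline { void process(Request request); }

    static class Request {
        final boolean dummy;    // downstream systems must check this flag
        final String payload;
        Request(boolean dummy, String payload) { this.dummy = dummy; this.payload = payload; }
    }

    static void start(Pipeline pipeline) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(
                () -> pipeline.process(new Request(true, "warm-up")),   // realistic but flagged as dummy
                0, 100, TimeUnit.MILLISECONDS);
    }
}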


It is common in many systems to process the same data through multiple independent instances of the system in parallel, primarily for the improved resiliency that is conferred. If some component fails, the user will still receive the result needed. Low-latency systems gain the same resiliency benefits of parallel, redundant processing but can also use this approach to reduce certain kinds of variable latency.


All real-world computational processes of nontrivial complexity have some variance in latency even when the input data is the same. These variations can be caused by minute differences in thread scheduling, explicitly randomized behaviors such as Ethernet's exponential back-off algorithm, or other unpredictable factors. Some of these variations can be quite large: page faults, garbage collections, network congestion, etc., can all cause occasional delays that are several orders of magnitude larger than the typical processing latency for the same input.


Running multiple, independent instances of the system, combined with a protocol that allows the end recipient to accept the first result produced and discard subsequent redundant copies, both provides the benefit of less-frequent outages and avoids some of the larger delays.


Stream processing and short circuits.


Consider a news analytics system whose requirements are understood to be "build an application that can extract corporate earnings data from a press release document as quickly as possible." Separately, it was specified that the press releases would be pushed to the system via FTP. The system was thus designed as two applications: one that received the document via FTP, and a second that parsed the document and extracted the earnings data. In the first version of this system, an open-source FTP server was used as the first application, and the second application (the parser) assumed that it would receive a fully formed document as input, so it did not start parsing the document until it had fully arrived.


Measuring the performance of the system showed that while parsing was typically completed in just a few milliseconds, receiving the document via FTP could take tens of milliseconds from the arrival of the first packet to the arrival of the last packet. Moreover, the earnings data was often present in the first paragraph of the document.


In a multistep process it may be possible for subsequent stages to start processing before prior stages have finished, sometimes referred to as stream-oriented or pipelined processing. This can be especially useful if the output can be computed from a partial input. Taking this into account, the developers reconceived their overall objective as "build a system that can deliver earnings data to the client as quickly as possible." This broader objective, combined with the understanding that the press release would arrive via FTP and that it was possible to extract the earnings data from the first part of the document (i. e., before the rest of the document had arrived), led to a redesign of the system.


The FTP server was rewritten to forward portions of the document to the parser as they arrived rather than wait for the entire document. Likewise, the parser was rewritten to operate on a stream of incoming data rather than on a single document. The result was that in many cases the earnings data could be extracted within just a few milliseconds of the start of the arrival of the document. This reduced overall latency (as observed by the client) by several tens of milliseconds without the internal implementation of the parsing algorithm being any faster.
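
The pipelined version boils down to a parser that accepts fragments as they arrive and announces the earnings figure as soon as the relevant pattern shows up, rather than waiting for the full document. The regular expression below is a stand-in for whatever extraction logic a real system would use.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Stream-oriented parser: feed it document fragments as they arrive over FTP;
// it reports the earnings figure as soon as the pattern is visible, which may
// be long before the last packet of the document has been received.
class StreamingEarningsParser {
    private static final Pattern EARNINGS =
            Pattern.compile("earnings per share of \\$(\\d+\\.\\d+)");   // illustrative pattern
    private final StringBuilder received = new StringBuilder();
    private boolean reported = false;

    void onChunk(String chunk) {
        if (reported) return;
        received.append(chunk);
        Matcher m = EARNINGS.matcher(received);
        if (m.find()) {
            reported = true;
            onEarnings(Double.parseDouble(m.group(1)));
        }
    }

    void onEarnings(double earningsPerShare) {
        System.out.println("earnings: " + earningsPerShare);   // hand off to the trading logic
    }
}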


Conclusion.


While latency requirements are common to a wide array of software applications, the financial trading industry and the segment of the news media that supplies it with data have an especially competitive ecosystem that produces challenging demands for low-latency distributed systems.


As with most engineering problems, building effective low-latency distributed systems starts with having a clear understanding of the problem. The next step is measuring actual performance and then, where necessary, making improvements. In this domain, improvements often require some combination of digging below the surface of common software abstractions and trading some degree of efficiency for improved latency.




Andrew Brook is the CTO of Selerity, a provider of realtime news, data, and content analytics. Previously he led development of electronic currency trading systems at two large investment banks and launched a pre-dot-com startup to deliver AI-powered scheduling software to agile manufacturers. His expertise lies in applying distributed, realtime systems technology and data science to real-world business problems. He finds Wireshark to be more interesting than PowerPoint.


© 2018 ACM 1542-7730/14/0300 $10.00.




Elios | Sat, 07 Nov 2018 09:29:52 UTC.


Thanks for the nice post. That's a great sum-up of problems in the design and implementation of distributed low latency systems.


I'm working on a distributed low-latency market data distribution system. In this system, one of the biggest challenges is how to measure its latency, which is supposed to be a few microseconds.


In our previous system, the latency was measured in an end-to-end manner. We took timestamps in milliseconds on both the publisher and subscriber side and recorded the difference between them. This works, but we are aware that the result is not accurate because, even with servers having clocks synchronized with NTP, users sometimes complain that negative latency is observed.


Given we are reducing the latency to microseconds, the end-to-end measurement seems to be too limited (it would be better with PTP, but we can't force our users to support PTP in their infrastructure), and thus we are trying to measure a round-trip latency. However, I can immediately see several cons with this method:


- Extra complexity to configure and implement the system, because we need to ensure two-way communication.
- We can't deduce the end-to-end latency from the round-trip one, because the loads in both directions are not the same (we want to send only some probes and get them back).


Do you have some experience with round-trip latency measurement, and if so, could you please share some best practices?
