<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Matthieu Bagory]]></title><description><![CDATA[Software Lead Developer]]></description><link>https://matthieu.bagory.com/</link><image><url>https://matthieu.bagory.com/favicon.png</url><title>Matthieu Bagory</title><link>https://matthieu.bagory.com/</link></image><generator>Ghost 5.33</generator><lastBuildDate>Tue, 14 Apr 2026 22:13:56 GMT</lastBuildDate><atom:link href="https://matthieu.bagory.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Video-based ML identification of medicine boxes]]></title><description><![CDATA[<p>Meditect is a company that provides medical software for pharmacists and patients in West Africa. We originally developed a traceability solution, and later, business management software for pharmacies.</p><p>One of the most basic uses is to quickly identify a product. Whether for a pharmacy point of sale or for</p>]]></description><link>https://matthieu.bagory.com/meditect/</link><guid isPermaLink="false">63dac4084dff15089d1a1ae5</guid><category><![CDATA[Code]]></category><dc:creator><![CDATA[Matthieu Bagory]]></dc:creator><pubDate>Sun, 30 Jan 2022 20:11:27 GMT</pubDate><media:content url="https://matthieu.bagory.com/content/images/2022/01/solidaire-3.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://matthieu.bagory.com/content/images/2022/01/solidaire-3.jpeg" alt="Video-based ML identification of medicine boxes"><p>Meditect is a company that provides medical software for pharmacists and patients in West Africa. We originally developed a traceability solution, and later, business management software for pharmacies.</p><p>One of the most basic uses is to quickly identify a product. 
Whether for a pharmacy point of sale or for a patient to obtain medical information.</p><p>Most of the time, a barcode does its job pretty well. But the West African market has the particularity that roughly 1/3 of products don&apos;t have any. Pharmacists work around the problem by sticking on barcode labels supplied by their wholesalers. But patients or customs agencies still have to deal with it, especially for counterfeit products, which by definition are mostly sold outside of pharmacies. And while you can always fall back on a text search, it remains slow and error-prone.</p><h2 id="value-proposition">Value proposition</h2><p>We hypothesized that Machine Learning could keep a simple promise: identify a product by holding a medicine box in front of a smartphone camera. Since you can already recognize a plant or a bottle of wine, the promise seemed within reach.</p><p>Our context has several particularities:</p><ul><li>Product differences can be subtle. For instance, the only difference may be the active ingredient dosage (500 mg vs 1000 mg)</li><li>Products usually have both an English and a French side</li><li>Dozens of new products are released to the market every month</li><li>There is simply no single dataset of all existing medicines, whether worldwide or in West Africa</li><li>Mobile networks are slow and unreliable. Better to bet on on-device inference (<a href="https://developers.google.com/ml-kit">ML Kit</a> is great) than to wait for remote server-side processing</li><li>Devices are mostly entry-level models with low-end cameras, sometimes even without autofocus</li></ul><h2 id="facing-the-unknown">Facing the unknown</h2><p>The basic approach for identification is to use multi-class classification. You end up with a probability distribution across all the classes used during training. But what if you now have a new product and need to add its class to your classifier? You have no choice but to retrain your entire model.</p><p>An alternative is to use fingerprints. 
The idea is to generate, from an image, a vector that is as specific to the product as possible, regardless of noise (orientation, background, lighting, etc.). To identify a product in an image, we compare its vector to a list of reference vectors for all existing products. Such a comparison is performed using cosine similarity: the value closest to 1 (identical vectors) designates the identified product <a href="https://medium.com/@sorenlind/nearest-neighbors-with-keras-and-coreml-755e76fedf36">https://medium.com/@sorenlind/nearest-neighbors-with-keras-and-coreml-755e76fedf36</a>. The reference vectors are computed beforehand, from images taken with a basic desktop scanner, for every product sold in the West African market.</p><figure class="kg-card kg-image-card"><img src="https://matthieu.bagory.com/content/images/2023/02/image-1.png" class="kg-image" alt="Video-based ML identification of medicine boxes" loading="lazy" width="1400" height="899" srcset="https://matthieu.bagory.com/content/images/size/w600/2023/02/image-1.png 600w, https://matthieu.bagory.com/content/images/size/w1000/2023/02/image-1.png 1000w, https://matthieu.bagory.com/content/images/2023/02/image-1.png 1400w" sizes="(min-width: 720px) 720px"></figure><p>Such a classifier has the great advantage of being easily extensible: for any new product, you only need to generate a new fingerprint from a reference image and add it to the existing list.</p><h2 id="small-is-beautiful">Small is beautiful</h2><p>To get a good fingerprint, the idea is to generalize what you want to identify. And neural networks are great for that.</p><p><a href="https://en.wikipedia.org/wiki/Convolutional_neural_network">Convolutional Neural Networks (CNNs)</a> are trendy in image recognition, and more generally in domains where convolution takes advantage of local context (text or speech recognition, for instance). 
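</p><p>As a purely illustrative sketch (made-up product names and vectors, not our production code), the fingerprint lookup described above boils down to a cosine-similarity nearest-neighbor search:</p>

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(fingerprint, references, threshold=0.9):
    """Return the product whose reference vector is closest to `fingerprint`,
    or None if even the best match stays below the similarity threshold."""
    best_name, best_sim = None, -1.0
    for name, vector in references.items():
        sim = cosine_similarity(fingerprint, vector)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name if best_sim >= threshold else None

# Hypothetical reference fingerprints (real ones come out of the encoder).
references = {
    "paracetamol-500": [0.9, 0.1, 0.3],
    "paracetamol-1000": [0.2, 0.8, 0.5],
}
print(identify([0.85, 0.15, 0.28], references))  # → paracetamol-500
```

<p>Adding a new product is just one more entry in the reference list; no retraining involved.</p><p>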
CNNs, however, have a lot of parameters and are resource-intensive to train, since the dataset size you need grows roughly with your model complexity. A classic trick is to use transfer learning: scrap the last classification layer of a freely available model (Inception, VGG, ResNet, etc.) already trained on a lot of everyday objects and animals. That works fine for detecting a pangolin from cat-and-dog training, but less so for medicine boxes, with their specific topology (surfaces, edges, and corners), text in various sizes, background colors, brand logos, etc.</p><p>Our solution, based on <a href="https://medium.com/@sorenlind/a-deep-convolutional-denoising-autoencoder-for-image-classification-26c777d3b88e">https://medium.com/@sorenlind/a-deep-convolutional-denoising-autoencoder-for-image-classification-26c777d3b88e</a>, was to use a denoising autoencoder. Autoencoders consist of an encoder and a decoder, linked by a latent vector. After training the model to generate the reference noise-free image of a product from every noisy image, you can throw away the decoder and use the latent vector to generate fingerprints: it embeds what a medicine box is, independently of acquisition noise.</p><figure class="kg-card kg-image-card"><img src="https://matthieu.bagory.com/content/images/2022/01/image.png" class="kg-image" alt="Video-based ML identification of medicine boxes" loading="lazy" width="600" height="161" srcset="https://matthieu.bagory.com/content/images/2022/01/image.png 600w"></figure><p>The real trick was to optimize the latent vector size. Too small and you&apos;re not specific enough because you lose information. Too big and you embed noise or sparsity and fall into the <a href="https://en.wikipedia.org/wiki/Curse_of_dimensionality">curse of dimensionality</a>.</p><h2 id="pimp-your-dataset">Pimp your dataset</h2><p>Still, building a dataset from scratch is time-consuming and resource-intensive. 
Most of the hours are spent on tedious data transfers, renaming, selection, and cleaning.</p><p>Generalization is mostly about varying the noise. In our case, that means image background and box orientation, plus a bit of luminosity, reflections, and hand obstruction. Image augmentation (Gaussian noise, rotation, symmetry, etc.) can help you a bit, but you can&apos;t ignore reality and the need for real-life backgrounds and orientations.</p><p>We used 2 tricks to speed up the process. Firstly, we took videos instead of still images, while randomly rotating each box in front of the camera. Capture is greatly accelerated, and image sampling from the video can be finely tuned afterward.</p><p>Secondly, we filmed an additional video with a green screen background. A relatively simple post-processing step then allowed us to composite in as many backgrounds as we wanted.</p><h2 id="trust-in-me">Trust in me</h2><p>So far, we&apos;ve built a model that, for each image of a video stream, generates a list of distances to every product and, therefore, the best candidate. We now have to set a threshold and decide whether the product is detected or not.</p><p>There is always a tradeoff in classification between specificity and sensitivity. Too high a threshold and we will see products where there are none (false positives). Too low a threshold and we will miss detections (false negatives).</p><p>There is no right or wrong choice. It depends on the use case and your users&apos; acceptance of false positives and negatives.</p><p>In practice, the classic approach is to use <a href="https://en.wikipedia.org/wiki/Receiver_operating_characteristic">ROC curves</a> and <a href="https://en.wikipedia.org/wiki/Youden&apos;s_J_statistic">Youden&apos;s index</a> to find an optimal threshold value.</p><h2 id="the-best-of-both-worlds">The best of both worlds</h2><p>Ultimately, the goal is to help your users. 
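</p><p>As a toy illustration of the threshold selection from the previous section, here is Youden's J computed over a handful of invented validation scores. The score here is a similarity (the complement of the distance), so higher means closer:</p>

```python
def youden_threshold(scores, labels):
    """Pick the match-score threshold maximizing Youden's J = TPR - FPR.
    `scores`: best-match similarity per validation image (higher = closer);
    `labels`: True when that best match was actually the right product."""
    positives = sum(1 for y in labels if y)
    negatives = len(labels) - positives
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if y and s >= t)
        fp = sum(1 for s, y in zip(scores, labels) if not y and s >= t)
        j = tp / positives - fp / negatives
        if j > best_j:
            best_t, best_j = t, j
    return best_t

# Made-up validation data: genuine matches tend to score higher.
scores = [0.95, 0.92, 0.90, 0.85, 0.80, 0.75, 0.70, 0.60]
labels = [True, True, True, True, False, True, False, False]
print(youden_threshold(scores, labels))  # → 0.85
```

<p>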
And most of the time, you want to deliver an answer they can trust, not bother them with informed consent.</p><p>Our case, however, is typical of a lot of Machine Learning applications: good enough to show some magic, but not enough to be reliable.</p><p>To greatly improve our performance, we built an additional model to run in parallel with our denoising autoencoder classifier. Its only role was to detect medicine boxes and act as a first-stage filter before product classification.</p><p>This time, a CNN with transfer learning did the job perfectly, because detecting medicine boxes mostly means discerning them from everyday objects and animals.</p><h2 id="if-better-is-possible-good-is-not-enough">If better is possible, good is not enough</h2><p>Together, the 2 models offer an accuracy of around 95%, an honorable performance in ML, especially from a small dataset.</p><p>In the end, we ran up against a market reality. Firstly, a 5% error rate is still unacceptable for a medicine. Misidentifying a product for a patient or a customs agency could result in a serious health hazard. Secondly, pharmacists systematically work around the problem by using 100% accurate barcode stickers provided by their wholesalers.</p><p>Ultimately, we had to kill the project and focus on pharmacy management software. That being said, we learned a lot and proved to ourselves that developing a production-ready solution with limited resources and existing tools was possible.</p>]]></content:encoded></item><item><title><![CDATA[Refuel]]></title><description><![CDATA[<p>Refuel is a fuel delivery service. 
Originally intended for consumers, it gradually became dedicated to companies and vehicle fleet managers.</p><p>The value proposition is to save time by replacing the trip to a service station with a delivery, even in places that are difficult to access, such as building car</p>]]></description><link>https://matthieu.bagory.com/refuel/</link><guid isPermaLink="false">63dac4084dff15089d1a1adb</guid><category><![CDATA[Code]]></category><dc:creator><![CDATA[Matthieu Bagory]]></dc:creator><pubDate>Sun, 31 Mar 2019 13:32:00 GMT</pubDate><media:content url="https://matthieu.bagory.com/content/images/2021/06/arton6921.png" medium="image"/><content:encoded><![CDATA[<img src="https://matthieu.bagory.com/content/images/2021/06/arton6921.png" alt="Refuel"><p>Refuel is a fuel delivery service. Originally intended for consumers, it gradually became dedicated to companies and vehicle fleet managers.</p><p>The value proposition is to save time by replacing the trip to a service station with a delivery, even in places that are difficult to access, such as building car parks.</p><p>Under the hood, Refuel uses a fleet of utility trucks equipped with a tank and a pump controller, frictionless web and mobile apps to place orders, and algorithms to automate and optimize order dispatching, cost calculation, tanker refueling, ETA tracking, and billing.</p><p>I worked as a freelancer for an entrepreneur, and my role was to design and then implement the whole technical solution. I also led 2 junior developers.</p><!--kg-card-begin: markdown--><pre><code>- Firebase Realtime Database
- REST API with Google Cloud Functions (serverless)
- React Native iOS and Android mobile apps
- Java module for pump controller
- React.js back-office
- Material Design (React Native Elements and Material-UI)
- Payments with Stripe
</code></pre>
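<p>As a flavor of the cost-calculation algorithms mentioned above, here is a deliberately toy pricing function. Every rate and parameter is invented for illustration; the real model also covered shifts, depreciation, traffic, and fuel sourcing:</p>

```python
def delivery_price(liters, fuel_cost_per_liter, travel_min, service_min,
                   labor_rate_per_min=0.60, truck_cost_per_min=0.25,
                   margin=0.08):
    """Toy delivery pricing: fuel at cost, plus time-based labor and
    vehicle costs, plus a target margin. All rates are placeholders."""
    fuel = liters * fuel_cost_per_liter
    time_cost = (travel_min + service_min) * (labor_rate_per_min + truck_cost_per_min)
    return round((fuel + time_cost) * (1 + margin), 2)

# A 60 L delivery, 25 minutes away, 10 minutes on site:
print(delivery_price(60, 1.50, travel_min=25, service_min=10))  # → 129.33
```

<p>With margins this thin, the point is that every operational input shows up explicitly in the price.</p>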
<!--kg-card-end: markdown--><p>I solved three main challenges. Firstly, I designed business-savvy algorithms that take into account a large number of operational inputs: labor cost and shifts, equipment depreciation, travel time with traffic, gross fuel pricing and sourcing, etc. Fuel is a commoditized product with low margins, and precise cost calculation is essential for profitability.</p><p>Secondly, I designed an end-to-end pipeline with fraud prevention mechanisms: credit card imprint, redundant control of delivered volume, dispute process through the help desk, driver rating, etc. Fuel is a valuable and easily laundered good, and trust by design is essential, both with employees and customers.</p><p>Thirdly, I implemented a real-time solution, both for tracking truck position and for estimating the time of arrival with traffic. Since Uber, UX expectations are very high.</p><figure class="kg-card kg-image-card"><img src="https://matthieu.bagory.com/content/images/2021/06/1_4.7-inch-iPhone-7_screen__1.jpg" class="kg-image" alt="Refuel" loading="lazy" width="750" height="1334" srcset="https://matthieu.bagory.com/content/images/size/w600/2021/06/1_4.7-inch-iPhone-7_screen__1.jpg 600w, https://matthieu.bagory.com/content/images/2021/06/1_4.7-inch-iPhone-7_screen__1.jpg 750w" sizes="(min-width: 720px) 720px"></figure>]]></content:encoded></item><item><title><![CDATA[Pleazup]]></title><description><![CDATA[<p>Pleazup is a free social network to share gift ideas tactfully with your family.</p><p>The value proposition is to share a list of gifts that we would like to receive while preserving as much as possible the joy of surprises through anonymous interactions. 
You can make a suggestion or book</p>]]></description><link>https://matthieu.bagory.com/pleazup/</link><guid isPermaLink="false">63dac4084dff15089d1a1ae3</guid><category><![CDATA[Code]]></category><dc:creator><![CDATA[Matthieu Bagory]]></dc:creator><pubDate>Mon, 01 Jan 2018 16:42:00 GMT</pubDate><media:content url="https://matthieu.bagory.com/content/images/2021/08/box-close-up-gift-842876-e1549448346930.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://matthieu.bagory.com/content/images/2021/08/box-close-up-gift-842876-e1549448346930.jpeg" alt="Pleazup"><p>Pleazup is a free social network to share gift ideas tactfully with your family.</p><p>The value proposition is to share a list of gifts that we would like to receive while preserving as much as possible the joy of surprises through anonymous interactions. You can make a suggestion or book a gift idea without the recipient knowing your identity.</p><p>It&apos;s a Twitter-like social network: everybody can follow anybody without accepting requests. But you&apos;re always notified when followed, and you can block anyone if needed.</p><p>I co-founded the company with Diane Frachon (<a href="https://www.linkedin.com/in/diane-frachon/">https://www.linkedin.com/in/diane-frachon/</a>). The project started in <em>La Cantine</em>, then <em>Numa</em> (<a href="https://www.numa.co/">https://www.numa.co/</a>), back then a free coworking space in Paris. We then moved to R&#xE9;union Island and were incubated by the regional startup program.</p><h2 id="revenue">Revenue</h2><p>Pleazup&apos;s revenue model is based on affiliation. If a gift idea is a merchant product, for instance, a book or a consumer appliance, we try to find merchant websites that sell it, and we provide a link to purchase it. 
If the merchant also offers an affiliate program, we earn a commission on a sale if the user clicked our link within the previous month.</p><h2 id="growth">Growth</h2><p>We have 2 strategies to reduce the chicken-and-egg problem that every social network faces: our service is only truly useful when a whole family uses it, but users only want to use it when it&apos;s already useful.</p><p>Firstly, we try to reduce the onboarding friction of building the network. We ask new users to share their email or phone number and automatically match them with existing users. We also notify them whenever one of their contacts signs up.</p><p>Secondly, we optionally provide a public URL of your gift idea list, shareable to anyone without authentication.</p><h2 id="technology">Technology</h2><p>Social networks share certain technical generalities. For instance, they need complex database queries to answer social questions (ideas liked by friends of friends, etc.).</p><p>They also need an algorithm to sort their content. Notifications could be displayed chronologically, but suggested content from our &quot;Inspirations&quot; feature requires a recommendation engine, like Facebook EdgeRank (<a href="https://en.wikipedia.org/wiki/EdgeRank">https://en.wikipedia.org/wiki/EdgeRank</a>). We implemented a similar algorithm, weighting freshness, popularity, and monetization. Essentially, a recommendation engine is a search engine without textual queries: what should you see if you don&apos;t know what you want?</p><p>Our affiliation business model also requires matching gift ideas with product pages from affiliate merchant websites. Since textual information from gift ideas is rarely specific enough, we developed an image search engine restricted to merchants&apos; websites.</p><!--kg-card-begin: markdown--><pre><code>- Native iOS (Swift) &amp; Android (Java) mobile apps
- React web app
- Material Design system
- Node backend using Parse Platform (https://parseplatform.org/)
- Heroku hosting
- Search with Algolia
- Recommender engine using Google Vision API
</code></pre>
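<p>The EdgeRank-like ranking described above can be sketched as a weighted score. The weights, half-life, and signals below are invented placeholders, not Pleazup's actual tuning:</p>

```python
import math

def inspiration_score(age_hours, likes, bookings, is_affiliated,
                      w_fresh=1.0, w_pop=0.5, w_money=0.3, half_life=48.0):
    """Toy feed score: exponential freshness decay, log-damped popularity
    from social interactions, and a bonus for monetizable gift ideas."""
    freshness = math.exp(-math.log(2) * age_hours / half_life)
    popularity = math.log1p(likes + 2 * bookings)  # bookings weigh more than likes
    monetization = 1.0 if is_affiliated else 0.0
    return w_fresh * freshness + w_pop * popularity + w_money * monetization

# An old but very popular idea can outrank a fresh, unproven one:
fresh = inspiration_score(age_hours=2, likes=0, bookings=0, is_affiliated=False)
popular = inspiration_score(age_hours=240, likes=50, bookings=10, is_affiliated=False)
print(popular > fresh)  # → True
```

<p>The log damping keeps a handful of viral ideas from monopolizing the feed, while the half-life decides how fast the feed turns over.</p>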
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card"><img src="https://matthieu.bagory.com/content/images/2025/05/Iphone-Inspirations-2.png" class="kg-image" alt="Pleazup" loading="lazy" width="1203" height="2029" srcset="https://matthieu.bagory.com/content/images/size/w600/2025/05/Iphone-Inspirations-2.png 600w, https://matthieu.bagory.com/content/images/size/w1000/2025/05/Iphone-Inspirations-2.png 1000w, https://matthieu.bagory.com/content/images/2025/05/Iphone-Inspirations-2.png 1203w" sizes="(min-width: 720px) 720px"></figure>]]></content:encoded></item><item><title><![CDATA[OuiRun]]></title><description><![CDATA[<p>OuiRun is a mobile app for finding your next running, jogging, or trail partner.</p><p>I worked as a freelancer for a few weeks, and my role was to maintain and develop an existing stack.</p><!--kg-card-begin: markdown--><pre><code>- MongoDB database
- Parse Platform (https://parseplatform.org/)
- Native Swift (iOS) and Java (Android)</code></pre>]]></description><link>https://matthieu.bagory.com/ouirun/</link><guid isPermaLink="false">63dac4084dff15089d1a1adc</guid><category><![CDATA[Code]]></category><dc:creator><![CDATA[Matthieu Bagory]]></dc:creator><pubDate>Fri, 28 Jul 2017 13:37:00 GMT</pubDate><media:content url="https://matthieu.bagory.com/content/images/2021/06/cover-800x450.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://matthieu.bagory.com/content/images/2021/06/cover-800x450.jpeg" alt="OuiRun"><p>OuiRun is a mobile app for finding your next running, jogging, or trail partner.</p><p>I worked as a freelancer for a few weeks, and my role was to maintain and develop an existing stack.</p><!--kg-card-begin: markdown--><pre><code>- MongoDB database
- Parse Platform (https://parseplatform.org/)
- Native Swift (iOS) and Java (Android) mobile apps
- Facebook login and API
</code></pre>
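<p>A purely illustrative sketch of this kind of social-graph matching: rank candidate running partners by how many friends they share with the user (toy data, not OuiRun's actual algorithm):</p>

```python
def mutual_friends(graph, a, b):
    """Count friends shared by `a` and `b` in a friendship graph
    (a dict mapping each user to a set of friends)."""
    return len(graph.get(a, set()) & graph.get(b, set()))

def rank_partners(graph, user, candidates):
    """Rank candidate running partners by mutual-friend count, descending."""
    return sorted(candidates,
                  key=lambda c: mutual_friends(graph, user, c),
                  reverse=True)

graph = {
    "alice": {"bob", "carol", "dan"},
    "bob": {"alice", "carol"},
    "eve": {"carol", "dan"},
    "frank": {"zoe"},
}
print(rank_partners(graph, "alice", ["eve", "frank", "bob"]))  # → ['eve', 'bob', 'frank']
```

<p>In practice, the signal came from the Facebook social graph rather than a local dict, and mutual-friend count was one criterion among several.</p>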
<!--kg-card-end: markdown--><p>My main contribution was to migrate the matching mechanism from Tinder-like matching to Facebook-like friend requests. I also refactored and improved the matching algorithm with social graph information from the Facebook API (friends of friends, etc.).</p><p>OuiRun is now OuiLive (<a href="https://www.ouilive.co/">https://www.ouilive.co/</a>).</p>]]></content:encoded></item><item><title><![CDATA[SnapMart]]></title><description><![CDATA[<p>SnapMart is a B2B mobile app whose objective is to reduce food waste.</p><p>The service connects restaurants with supermarkets. </p><p>Supermarkets could enter inventories of perishable products, manage discounts, and edit invoices.</p><p>Restaurants could browse nearby perishable products by category, choose among several delivery methods, and pay the supermarket.</p><p>Under</p>]]></description><link>https://matthieu.bagory.com/snapmart/</link><guid isPermaLink="false">63dac4084dff15089d1a1add</guid><category><![CDATA[Code]]></category><dc:creator><![CDATA[Matthieu Bagory]]></dc:creator><pubDate>Fri, 04 Nov 2016 15:01:00 GMT</pubDate><media:content url="https://matthieu.bagory.com/content/images/2021/06/snapmart.png" medium="image"/><content:encoded><![CDATA[<img src="https://matthieu.bagory.com/content/images/2021/06/snapmart.png" alt="SnapMart"><p>SnapMart is a B2B mobile app whose objective is to reduce food waste.</p><p>The service connects restaurants with supermarkets. 
</p><p>Supermarkets could enter inventories of perishable products, manage discounts, and edit invoices.</p><p>Restaurants could browse nearby perishable products by category, choose among several delivery methods, and pay the supermarket.</p><p>Under the hood, SnapMart is the combination of a geographic search algorithm + a multi-modal delivery cost calculation algorithm + a payment marketplace powered by Stripe.</p><p>I worked as a freelancer for an entrepreneur, and my role was to design and implement the whole technical solution: backend, mobile apps, and back-office dashboard.</p><!--kg-card-begin: markdown--><pre><code>- Backend, API, and dashboard with Parse (https://parseplatform.org/)
- Heroku hosting
- Native iOS app
- Marketplace with Stripe Connect (https://stripe.com/connect)
</code></pre>
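<p>The multi-modal delivery choice mentioned above (bike, scooter, or car, based on cost and time) can be sketched as a constrained minimum. Speeds, fees, and capacities below are invented for illustration:</p>

```python
def best_modality(distance_km, weight_kg, deadline_min):
    """Pick the cheapest transport modality that satisfies payload and
    deadline constraints; returns None when nothing fits."""
    modalities = {
        # name: (speed km/h, base fee, fee per km, max payload kg)
        "bike":    (15, 4.0, 0.8, 15),
        "scooter": (25, 5.0, 1.0, 40),
        "car":     (20, 8.0, 1.5, 200),
    }
    feasible = []
    for name, (speed, base, per_km, max_kg) in modalities.items():
        eta_min = distance_km / speed * 60
        if weight_kg <= max_kg and eta_min <= deadline_min:
            feasible.append((base + per_km * distance_km, name))
    return min(feasible)[1] if feasible else None

print(best_modality(distance_km=3, weight_kg=25, deadline_min=30))  # → scooter
print(best_modality(distance_km=3, weight_kg=5, deadline_min=30))   # → bike
```

<p>The real dispatcher fed live quotes and ETAs from the courier API into the same kind of feasibility-then-cheapest decision.</p>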
<!--kg-card-end: markdown--><p>I solved two main technical challenges. Firstly, building the payment marketplace. Stripe simplifies a lot of things, but <a href="https://en.wikipedia.org/wiki/Know_your_customer">KYC</a> still requires advanced UX, and generating invoices, including discounts and delivery, remains complex.</p><p>Secondly, automating delivery. Courier APIs (<a href="https://stuart.com/">https://stuart.com/</a>) make it easy to order and follow a delivery, but we still have to choose the best transport modality (bike, scooter, or car) based on cost and time.</p><p>The first tests were carried out with a Parisian supermarket and nearby restaurateurs, but the project was stopped due to a lack of commercial success.</p><figure class="kg-card kg-image-card"><img src="https://matthieu.bagory.com/content/images/2021/06/4.7-inch-iPhone-6-Screenshot-4.jpg" class="kg-image" alt="SnapMart" loading="lazy" width="750" height="1334" srcset="https://matthieu.bagory.com/content/images/size/w600/2021/06/4.7-inch-iPhone-6-Screenshot-4.jpg 600w, https://matthieu.bagory.com/content/images/2021/06/4.7-inch-iPhone-6-Screenshot-4.jpg 750w" sizes="(min-width: 720px) 720px"></figure>
A basic text editor also allows you to add custom text, including emoticons.</p>]]></description><link>https://matthieu.bagory.com/live-for-messenger/</link><guid isPermaLink="false">63dac4084dff15089d1a1ae4</guid><category><![CDATA[Code]]></category><dc:creator><![CDATA[Matthieu Bagory]]></dc:creator><pubDate>Wed, 01 Jul 2015 06:58:00 GMT</pubDate><media:content url="https://matthieu.bagory.com/content/images/2021/07/11722357_1087384911274903_436392950551279369_o.png" medium="image"/><content:encoded><![CDATA[<img src="https://matthieu.bagory.com/content/images/2021/07/11722357_1087384911274903_436392950551279369_o.png" alt="LIVE for messenger"><p><em>LIVE for Messenger</em> is an iOS mobile app intended to produce videos with text overlay and easily share them on Messenger.</p><p>Overlay text includes automatic context information such as the current timestamp and the reverse-geocoded address. A basic text editor also allows you to add custom text, including emoticons.</p><p>I worked as a freelancer for an entrepreneur. Starting with a wireframe, my role was to find a technical solution, implement it, and then deploy the app on the Apple App Store.</p><!--kg-card-begin: markdown--><pre><code>- Native iOS app in Objective-C
- Facebook login
- Messenger API
</code></pre>
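<p>The overlay itself is conceptually simple: merge the automatic context with the user's text before burning it into the video. A minimal sketch (the address would come from a reverse-geocoding call, stubbed here with a literal):</p>

```python
from datetime import datetime, timezone

def overlay_text(ts, address, custom=None):
    """Build the text burned onto the video: a timestamp line, the
    reverse-geocoded address, and optional user-entered text."""
    stamp = f"{ts.year:04d}-{ts.month:02d}-{ts.day:02d} {ts.hour:02d}:{ts.minute:02d}"
    lines = [stamp, address]
    if custom:
        lines.append(custom)
    return "\n".join(lines)

ts = datetime(2015, 7, 1, 6, 58, tzinfo=timezone.utc)
print(overlay_text(ts, "12 Rue de Rivoli, Paris", "On my way!"))
```

<p>The hard part was not composing the string but compositing it frame-accurately into the video on-device.</p>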
<!--kg-card-end: markdown--><p>Technical challenges included video processing, reverse geocoding, Facebook login, and Messenger API integration.</p><figure class="kg-card kg-image-card"><img src="https://matthieu.bagory.com/content/images/2021/06/live-1.jpeg" class="kg-image" alt="LIVE for messenger" loading="lazy" width="322" height="572"></figure>]]></content:encoded></item><item><title><![CDATA[PhD in Biomedical Engineering]]></title><description><![CDATA[<p>The objective was to find predictive markers of acquired disability in multiple sclerosis.</p><p>The original title is:</p><blockquote>Methodological development for the absolute and multi-tissue quantification in magnetic resonance spectroscopy of metabolic alterations in multiple sclerosis.</blockquote><p>My work involved several scientific fields:</p><ul><li><strong>data fusion of image processing </strong>(segmentation and registration) and</li></ul>]]></description><link>https://matthieu.bagory.com/phd-in-biomedical-engineering/</link><guid isPermaLink="false">63dac4084dff15089d1a1ade</guid><category><![CDATA[Science]]></category><dc:creator><![CDATA[Matthieu Bagory]]></dc:creator><pubDate>Sat, 29 May 2010 13:50:00 GMT</pubDate><media:content url="https://matthieu.bagory.com/content/images/2021/06/pexels-mart-production-7089013.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://matthieu.bagory.com/content/images/2021/06/pexels-mart-production-7089013.jpg" alt="PhD in Biomedical Engineering"><p>The objective was to find predictive markers of acquired disability in multiple sclerosis.</p><p>The original title is:</p><blockquote>Methodological development for the absolute and multi-tissue quantification in magnetic resonance spectroscopy of metabolic alterations in multiple sclerosis.</blockquote><p>My work involved several scientific fields:</p><ul><li><strong>data fusion of image processing </strong>(segmentation and registration) and <strong>signal processing</strong> (non-linear 
regression) algorithms to provide spatial measurements of innovative biomarkers from magnetic resonance spectroscopy</li><li><strong>methodological developments</strong> in a clinical context (calibrations, corrections, variability assessments) to obtain quantitative and reliable data and allow comparisons between patients and examinations</li><li><strong>biostatistics</strong> to answer medical questions on longitudinal cohorts</li></ul><p>The main achievements were:</p><ul><li>participation in international scientific<strong> congresses</strong> (Hawaii and Montr&#xE9;al)</li><li>3 months as a scientific<strong> visitor at McGill University</strong> (Montr&#xE9;al), in the McConnell Brain Imaging Centre <a href="https://www.mcgill.ca/bic/">https://www.mcgill.ca/bic/</a></li><li>publication of a scientific<strong> article</strong> in the peer-reviewed journal IEEE Transactions on Medical Imaging <a href="https://ieeexplore.ieee.org/document/5951742?reload=true&amp;arnumber=5951742">https://ieeexplore.ieee.org/document/5951742?reload=true&amp;arnumber=5951742</a></li></ul><p>The scientific part of the doctorate was supervised by CREATIS <a href="https://www.creatis.insa-lyon.fr/">https://www.creatis.insa-lyon.fr/</a>, a research lab specializing in medical image processing and MRI acquisition.</p><p>The medical part of the doctorate was supervised by Prof. Confavreux&apos;s team from the Lyon neurological hospital, which specialized in multiple sclerosis.</p><p>My daily work took place within CERMEP <a href="https://www.cermep.fr/">https://www.cermep.fr/</a>, a clinical and research medical imaging platform inside the Lyon hospital.</p>]]></content:encoded></item></channel></rss>