Monday, August 1, 2022

Q2 ‘22 highlights and achievements



Posted by Nari Yoon, Hee Jung, DevRel Community Manager / Soonson Kwon, DevRel Program Manager

Let’s explore the highlights and accomplishments of the vast Google Machine Learning communities over the second quarter of the year! We are enthusiastic and grateful for all the activities by the global network of ML communities. Here are the highlights!

TensorFlow/Keras

TFUG Agadir hosted the #MLReady phase as part of #30DaysOfML. #MLReady aimed to equip attendees with the knowledge required to understand the different types of problems that deep learning can solve, and helped attendees prepare for the TensorFlow Certificate.

TFUG Taipei hosted basic Python and TensorFlow courses named From Python to TensorFlow. The goal of these events is to help everyone learn the fundamentals of Python and TensorFlow, including TensorFlow Hub and the TensorFlow API. The event videos are shared every week via a YouTube playlist.

TFUG New York hosted Introduction to Neural Radiance Fields for TensorFlow users. The talk included volume rendering, 3D view synthesis, and links to a minimal implementation of NeRF using Keras and TensorFlow. In the event, ML GDE Aritra Roy Gosthipaty (India) gave a talk focusing on breaking down the concepts of the academic paper, NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis, into simpler and more digestible snippets.

TFUG Turkey, GDG Edirne and GDG Mersin organized a TensorFlow Bootcamp 22, and ML GDE M. Yusuf Sarıgöz (Turkey) participated as a speaker with TensorFlow Ecosystem: Get most out of auxiliary packages. Yusuf demonstrated the inner workings of TensorFlow, how variables, tensors and operations interact with each other, and how auxiliary packages are built upon this skeleton.

TFUG Mumbai hosted the June Meetup and 110 people gathered. ML GDE Sayak Paul (India) and TFUG mentor Darshan Deshpande shared knowledge through sessions. And ML workshops for beginners went on, and participants built machine learning models without writing a single line of code.

ML GDE Hugo Zanini (Brazil) wrote Real-time SKU detection in the browser using TensorFlow.js. He shared a solution for a well-known problem in the consumer packaged goods (CPG) industry: real-time and offline SKU detection using TensorFlow.js.

ML GDE Gad Benram (Portugal) wrote Can a couple of TensorFlow lines reduce overfitting? He explained how just a few lines of code can generate data augmentations and boost a model’s performance on the validation set.
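The post’s own code isn’t reproduced here, but the idea behind few-line data augmentation can be sketched in plain NumPy. The specific transforms below (random horizontal flip and brightness jitter) are illustrative choices, not necessarily the ones the article uses:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image: np.ndarray) -> np.ndarray:
    """Randomly flip an HWC image horizontally and jitter its brightness."""
    if rng.random() < 0.5:
        image = image[:, ::-1, :]           # horizontal flip
    delta = rng.uniform(-0.1, 0.1)          # small brightness shift
    return np.clip(image + delta, 0.0, 1.0)

image = rng.random((32, 32, 3))
augmented = augment(image)
print(augmented.shape)  # (32, 32, 3)
```

Each call produces a slightly different view of the same image, which is what lets a small amount of augmentation code act as a regularizer.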

ML GDE Victor Dibia (USA) wrote How to Build An Android App and Integrate Tensorflow ML Models, sharing how to run machine learning models locally on Android mobile devices, and How to Implement Gradient Explanations for a HuggingFace Text Classification Model (Tensorflow 2.0), explaining in 5 steps how to verify that the model is focusing on the right tokens to classify text. He also wrote how to fine-tune a HuggingFace model for text classification using Tensorflow 2.0.

ML GDE Karthic Rao (India) launched a new series, ML for JS developers with TFJS. This series is a combination of short portrait and long landscape videos. You can learn how to build a toxic word detector using TensorFlow.js.

ML GDE Sayak Paul (India) implemented the DeiT family of ViT models, ported the pre-trained params into the implementation, and provided code for off-the-shelf inference, fine-tuning, visualizing attention rollout plots, and distilling ViT models through attention. (code | pretrained model | tutorial)
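Attention rollout, one of the visualizations mentioned above, composes per-layer attention maps into a single token-to-token map by averaging heads, adding the identity for the residual connection, and multiplying across layers. A minimal NumPy sketch (shapes and names are illustrative, not taken from the linked code):

```python
import numpy as np

def attention_rollout(attentions: list) -> np.ndarray:
    """Roll attention maps across layers.

    attentions: per-layer arrays, each (num_heads, tokens, tokens).
    Returns a (tokens, tokens) map of how much each output token
    attends to each input token across the whole network.
    """
    rollout = np.eye(attentions[0].shape[-1])
    for layer_attn in attentions:
        attn = layer_attn.mean(axis=0)                  # average the heads
        attn = attn + np.eye(attn.shape[-1])            # residual connection
        attn = attn / attn.sum(axis=-1, keepdims=True)  # re-normalize rows
        rollout = attn @ rollout                        # compose with earlier layers
    return rollout
```

Because every factor is row-stochastic, each row of the result still sums to 1 and can be read as a distribution over input tokens.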

ML GDE Sayak Paul (India) and ML GDE Aritra Roy Gosthipaty (India) inspected various phenomena of a Vision Transformer, shared insights from various relevant works done in the area, and provided concise implementations that are compatible with Keras models. They provide tools to probe into the representations learned by different families of Vision Transformers. (tutorial | code)

JAX/Flax

ML GDE Aakash Nain (India) gave a special talk, Introduction to JAX for ML GDEs, TFUG organizers and ML community organizers. He covered the fundamentals of JAX/Flax so that more and more people try out JAX in the near future.

ML GDE Seunghyun Lee (Korea) started a project, Training and Lightweighting Cookbook in JAX/FLAX. This project attempts to build a neural network training and lightweighting cookbook including three kinds of lightweighting solutions, i.e., knowledge distillation, filter pruning, and quantization.

ML GDE Yucheng Wang (China) wrote History and features of JAX and explained the difference between JAX and Tensorflow.

ML GDE Martin Andrews (Singapore) shared a video, Practical JAX: Using Hugging Face BERT on TPUs. He reviewed the Hugging Face BERT code, written in JAX/Flax, being fine-tuned on Google’s Colab using Google TPUs. (Notebook for the video)

ML GDE Soumik Rakshit (India) wrote Implementing NeRF in JAX. He attempts to create a minimal implementation of 3D volumetric rendering of scenes represented by Neural Radiance Fields.
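A core ingredient of any minimal NeRF is the positional encoding that maps 3D coordinates to sines and cosines of exponentially increasing frequencies, so the MLP can represent fine detail. A NumPy sketch of that step (frequency count and shapes are illustrative, not taken from the linked post):

```python
import numpy as np

def positional_encoding(x: np.ndarray, num_freqs: int = 6) -> np.ndarray:
    """NeRF-style encoding: expand each coordinate into sin/cos features
    at frequencies pi, 2*pi, 4*pi, ... (2^k * pi)."""
    freqs = 2.0 ** np.arange(num_freqs) * np.pi
    scaled = x[..., None] * freqs                    # (..., dims, num_freqs)
    enc = np.concatenate([np.sin(scaled), np.cos(scaled)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)            # (..., dims * 2 * num_freqs)

points = np.random.rand(4, 3)             # four 3D sample points
print(positional_encoding(points).shape)  # (4, 36)
```

The encoded points, concatenated with view directions, are what the NeRF MLP consumes to predict density and color along each camera ray.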

Kaggle

ML GDEs’ Kaggle notebooks were announced as winners of the Google OSS Expert Prize on Kaggle: Sayak Paul and Aritra Roy Gosthipaty’s Masked Image Modeling with Autoencoders in March; Sayak Paul’s Distilling Vision Transformers in April; Sayak Paul & Aritra Roy Gosthipaty’s Investigating Vision Transformer Representations and Soumik Rakshit’s Tensorflow Implementation of Zero-Reference Deep Curve Estimation in May; and Aakash Nain’s The Definitive Guide to Augmentation in TensorFlow and JAX in June.

ML GDE Luca Massaron (Italy) published The Kaggle Book with Konrad Banachewicz. This book details competition analysis, sample code, end-to-end pipelines, best practices, and tips & tricks. And in the online event, Luca and his co-author talked about how to compete on Kaggle.

ML GDE Ertuğrul Demir (Turkey) wrote Kaggle Handbook: Fundamentals to Survive a Kaggle Shake-up, covering the bias-variance tradeoff, the validation set, and cross-validation techniques. In the second post of the series, he showed more techniques using analogies and case studies.
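The cross-validation idea behind surviving a shake-up can be sketched without any ML library: shuffle the indices once, split them into k disjoint folds, and rotate which fold serves as the validation set. A minimal NumPy version (names and fold count are illustrative, not from the handbook):

```python
import numpy as np

def kfold_indices(n_samples: int, k: int, seed: int = 0):
    """Yield (train_idx, val_idx) pairs for k-fold cross validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)   # shuffle to avoid ordering bias
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        yield train, val

for train, val in kfold_indices(10, 5):
    print(len(train), len(val))   # 8 2 on each fold
```

Averaging a model’s score over all k validation folds gives a far more stable estimate of leaderboard performance than a single train/validation split.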

TFUG Chennai hosted ML Study Jam with Kaggle and created study groups for the participants. More than 60% of members were active during the whole program and many of them shared their completion certificates.

TFUG Mysuru organizer Usha Rengaraju shared a Kaggle notebook which contains the implementation of the research paper: UNETR – Transformers for 3D Biomedical Image Segmentation. The model automatically segments the stomach and intestines on MRI scans.

TFX

ML GDE Sayak Paul (India) and ML GDE Chansung Park (Korea) shared how to deploy a deep learning model with Docker, Kubernetes, and Github Actions, with two promising approaches – FastAPI (for REST) and TF Serving (for gRPC).

ML GDE Ukjae Jeong (Korea) and ML Engineers at Karrot Market, a mobile commerce unicorn with 23M users, wrote Why Karrot Uses TFX, and How to Improve Productivity on ML Pipeline Development.

ML GDE Jun Jiang (China) gave a talk introducing the concept of MLOps, the production-level end-to-end solutions of Google & TensorFlow, and how to use TFX to build a search and recommendation system & scientific research platform for large-scale machine learning training.

ML GDE Piero Esposito (Brazil) wrote Building Deep Learning Pipelines with Tensorflow Extended. He showed how to get started with TFX locally, how to move a TFX pipeline from a local environment to Vertex AI, and provided code samples to adapt and get started with TFX.

TFUG São Paulo (Brazil) had a series of online webinars on TensorFlow and TFX. In the TFX session, they focused on how to put models into production. They talked about the data structures in TFX and the implementation of the first pipeline in TFX: ingesting and validating data.

TFUG Stockholm hosted MLOps, TensorFlow in Production, and TFX, covering why, what and how you can effectively leverage MLOps best practices to scale ML efforts, and had a look at how TFX can be used for designing and deploying ML pipelines.

Cloud AI

ML GDE Chansung Park (Korea) wrote MLOps System with AutoML and Pipeline in Vertex AI on the GCP official blog. He showed how Google Cloud Storage and Google Cloud Functions can help manage data and handle events in the MLOps system.

He also shared the Github repository, Continuous Adaptation with VertexAI’s AutoML and Pipeline. This contains two notebooks demonstrating how to automatically produce a new AutoML model when a new dataset comes in.

TFUG Northwest (Portland) hosted The State and Future of AI + ML/MLOps/VertexAI lab walkthrough. In this event, ML GDE Al Kari (USA) outlined the technology landscape of AI, ML, MLOps and frameworks. Googler Andrew Ferlitsch gave a talk about Google Cloud AI’s definition of the 8 stages of MLOps for enterprise-scale production and how Vertex AI fits into each stage. And MLOps engineer Chris Thompson covered how easy it is to deploy a model using the Vertex AI tools.

Analysis

ML GDE Qinghua Duan (China) released a video which introduces Google’s latest 540 billion parameter model. He introduced the paper PaLM, and described the basic training process and innovations.

ML GDE Rumei Li (China) wrote blog posts reviewing the papers DeepMind’s Flamingo and Google’s PaLM.


