Modern Algorithms: Multivariate Public Key Cryptography (MPKC)

Multivariate-based algorithms are a powerful class of modern cryptographic algorithms, particularly because of their potential to resist attacks by quantum computers. They construct public-key schemes from systems of multivariate polynomials over finite fields. Examples of such algorithms include Enhanced Tame Transformation Signature, Hidden Field Equations, the Matsumoto-Imai Scheme, and Unbalanced Oil & Vinegar.

The core idea of MPKC algorithms is that evaluating the public map (encryption & signature verification) is easy, while inverting it (decryption & signature generation) is computationally hard without the secret key; solving a random system of multivariate quadratic equations over a finite field is NP-hard in general. The trapdoor is built from a central map, a system of multivariate polynomials with a special, easily invertible structure, which is then hidden by composing it with two secret affine transformations.
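
To make that trapdoor structure concrete, here is a minimal toy sketch in Python with illustrative (and insecure) parameters: a random quadratic central map is sandwiched between two secret affine maps, and only forward evaluation of the resulting public map is shown.

```python
# A toy sketch (assumed parameters, far too small to be secure) of the MPKC
# trapdoor structure P = S o F o T over GF(q): a quadratic "central map" F
# hidden between two secret affine maps S and T. Only forward evaluation is
# shown; recovering x from P(x) without the secret key is the hard problem.
import random

q = 31           # toy prime field GF(31); real schemes use larger parameters
n, m = 4, 3      # n input variables, m output polynomials (illustrative)

def rand_affine(rows, cols):
    """A random affine map x -> A.x + b over GF(q) (invertibility not checked here)."""
    A = [[random.randrange(q) for _ in range(cols)] for _ in range(rows)]
    b = [random.randrange(q) for _ in range(rows)]
    return A, b

def apply_affine(aff, x):
    A, b = aff
    return [(sum(A[i][j] * x[j] for j in range(len(x))) + b[i]) % q
            for i in range(len(A))]

# Secret central map F: m quadratic polynomials in n variables. A real scheme
# gives F a special structure (e.g., Oil & Vinegar) so the key holder can
# invert it; a random F is used here only to illustrate evaluation.
F_coeffs = [[[random.randrange(q) for _ in range(n)] for _ in range(n)]
            for _ in range(m)]                      # coefficient of x_i * x_j

def apply_central(x):
    return [sum(F_coeffs[k][i][j] * x[i] * x[j]
                for i in range(n) for j in range(n)) % q
            for k in range(m)]

T = rand_affine(n, n)   # secret input transformation
S = rand_affine(m, m)   # secret output transformation

def public_map(x):
    """P(x) = S(F(T(x))); the published key is the expanded polynomial form of P."""
    return apply_affine(S, apply_central(apply_affine(T, x)))

message = [random.randrange(q) for _ in range(n)]
print("P(message) =", public_map(message))
```

In practice the expanded polynomials of P are published while S, F, and T stay private: anyone can evaluate P, but only the key holder can invert it step by step through S, F, and T.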

From the world of Advanced Mathematics: Knot Theory

Studied as part of Topology, mathematical knots are closed loops in 3D space. The core idea of Knot Theory is to classify knots and to understand how they behave under continuous deformation. Knots come in various types, such as composite, prime, ribbon, satellite, torus, and trivial (unknots).

Knot Theory is extensively applied in fields such as biology (e.g., understanding processes like DNA replication), chemistry (e.g., designing new molecules), physics (e.g., studying electromagnetic field lines & quantum fields), and computer science (e.g., computational topology & visualization).

Modern research in Knot Theory aims to address its unsolved problems (e.g., Slice-Ribbon Conjecture), study its application in quantum physics, and create efficient algorithms for classifying knots & for calculating knot invariants.
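
As a small illustration of a knot invariant, here is a toy Python sketch of tricolorability, one of the simplest invariants. The diagram encoding used here (triples of arc indices per crossing) is an illustrative assumption.

```python
# A toy sketch of one of the simplest knot invariants: tricolorability.
# A diagram is encoded (illustrative assumption) as crossings of the form
# (over_arc, under_arc_1, under_arc_2). At each crossing the three incident
# arcs must be all the same colour or all different, which over GF(3) is
# exactly the condition 2*c(over) = c(under_1) + c(under_2) (mod 3).
from itertools import product

def is_tricolorable(num_arcs, crossings):
    """Brute force (fine only for tiny diagrams): does a valid,
    non-constant 3-colouring of the arcs exist?"""
    for colours in product(range(3), repeat=num_arcs):
        if len(set(colours)) < 2:
            continue  # at least two colours must actually be used
        if all((2 * colours[o] - colours[a] - colours[b]) % 3 == 0
               for o, a, b in crossings):
            return True
    return False

# Standard trefoil diagram: 3 arcs, 3 crossings.
trefoil = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]
print(is_tricolorable(3, trefoil))        # True

# Unknot drawn with no crossings: a single arc, so never tricolorable.
print(is_tricolorable(1, []))             # False
```

Since tricolorability is preserved under Reidemeister moves, the differing answers alone already prove that the trefoil cannot be deformed into the unknot.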

A peek into my AI Projects: Emotion Recognition System

My team & I developed a multi-model framework to identify human emotions (and their intensity) from visual expressions & voice timbre. In addition to achieving high model accuracy, engineering efficiency (e.g., inference on resource-constrained systems) was also a key objective.

Facial inputs were processed by a hybrid architecture of standard CNNs and Vision Transformers. Voice inputs were converted to Mel-Frequency Cepstral Coefficients and processed by a combination of RNNs & LSTMs. After input processing, information from both modalities was fused to generate the final emotion category & intensity level. The future direction of this research was to integrate EEG (electroencephalography) data with the audio-visual data to provide a more holistic understanding of emotions.
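
For readers who prefer code, here is a minimal late-fusion sketch in PyTorch of the idea described above. It is not the project's actual architecture: the module names, dimensions, and heads are illustrative stand-ins for the CNN/ViT visual branch and the MFCC + RNN/LSTM audio branch.

```python
# A minimal late-fusion sketch: one embedding per modality, concatenated and
# passed through a small fusion MLP that predicts an emotion category and an
# intensity score. All sizes here are illustrative assumptions.
import torch
import torch.nn as nn

class LateFusionEmotionNet(nn.Module):
    def __init__(self, visual_dim=512, audio_hidden=128, num_emotions=7):
        super().__init__()
        self.visual_proj = nn.Linear(visual_dim, 256)         # stand-in for a CNN/ViT embedding
        self.audio_encoder = nn.LSTM(input_size=40,           # 40 MFCC coefficients per frame
                                     hidden_size=audio_hidden, batch_first=True)
        self.fusion = nn.Sequential(nn.Linear(256 + audio_hidden, 128), nn.ReLU())
        self.emotion_head = nn.Linear(128, num_emotions)      # emotion category logits
        self.intensity_head = nn.Linear(128, 1)               # scalar intensity

    def forward(self, visual_feat, mfcc_frames):
        v = torch.relu(self.visual_proj(visual_feat))         # (batch, 256)
        _, (h, _) = self.audio_encoder(mfcc_frames)           # mfcc_frames: (batch, frames, 40)
        fused = self.fusion(torch.cat([v, h[-1]], dim=-1))    # concatenate, then MLP fusion
        return self.emotion_head(fused), self.intensity_head(fused)

model = LateFusionEmotionNet()
logits, intensity = model(torch.randn(2, 512), torch.randn(2, 100, 40))
print(logits.shape, intensity.shape)      # torch.Size([2, 7]) torch.Size([2, 1])
```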

Monolith or Microservices? The Debate Continues

Amazon Prime Video’s March 2023 announcement of migrating from distributed microservices to a monolith architecture surprised many in the global developer community, and reignited the Monolith versus Microservices debate. A varied range of analyses surfaced after that – some supported the move, some inferred that

Degree-Constrained Minimum Spanning Trees

Minimum Spanning Trees (MSTs) have powerful applications in a wide range of domains, including circuit design, computer science, electrical grids, financial markets, and telecom networks. They are also leveraged indirectly (e.g., as algorithm subroutines) for solving other critical problems, such as the Traveling Salesman or Minimum
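
As a rough illustration of the degree-constrained variant, the sketch below is a toy Kruskal-style greedy heuristic in Python; the edge encoding and example graph are illustrative assumptions, not part of the original post.

```python
# A toy, Kruskal-style greedy heuristic for a degree-constrained spanning tree:
# accept the cheapest edges that neither create a cycle nor push a vertex past
# the degree limit. Exact degree-constrained MST is NP-hard (with a degree
# bound of 2 it is essentially the Hamiltonian path problem), so this greedy
# pass may miss the optimum or fail to span the graph.

def dcmst_greedy(num_vertices, edges, max_degree):
    """edges: iterable of (weight, u, v); returns the list of accepted edges."""
    parent = list(range(num_vertices))              # union-find for cycle detection

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]           # path halving
            x = parent[x]
        return x

    degree = [0] * num_vertices
    tree = []
    for w, u, v in sorted(edges):
        if find(u) != find(v) and degree[u] < max_degree and degree[v] < max_degree:
            parent[find(u)] = find(v)
            degree[u] += 1
            degree[v] += 1
            tree.append((w, u, v))
    return tree

# With max_degree=2 the cheap "star" around vertex 0 is disallowed, forcing a
# path-like tree instead.
edges = [(1, 0, 1), (1, 0, 2), (1, 0, 3), (2, 1, 2), (3, 2, 3)]
print(dcmst_greedy(4, edges, max_degree=2))         # [(1, 0, 1), (1, 0, 2), (3, 2, 3)]
```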

NeurIPS 2022: My Top Two ‘Practically-Relevant’ Papers from the Outstanding 13

NeurIPS 2022 declared 13 submissions as outstanding papers from its main track.
▪︎ Is Out-of-distribution Detection Learnable?
▪︎ Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding
▪︎ Elucidating the Design Space of Diffusion-Based Generative Models
▪︎ ProcTHOR: Large-Scale Embodied AI Using Procedural Generation
▪︎ Using Natural Language and

The quantum entanglement of atomic clocks

Researchers at the University of Oxford have achieved the quantum entanglement of two strontium-based optical clocks. While previous entanglement demonstrations were limited to microscopic distances, this one appears to be the first reported case of macroscopic entanglement. Simply put, quantum entanglement is a phenomenon that

Recommended AI Papers: August 2022

▪︎ 3D Vision with Transformers: A Survey: https://arxiv.org/pdf/2208.04309v1.pdf
▪︎ Unifying Visual Perception by Dispersible Points Learning: https://arxiv.org/pdf/2208.08630v1.pdf
▪︎ ZoomNAS: Searching for Whole-body Human Pose Estimation in the Wild: https://arxiv.org/pdf/2208.11547v1.pdf
▪︎ ROLAND: Graph Learning Framework for Dynamic Graphs: https://arxiv.org/pdf/2208.07239v1.pdf
▪︎ Investigating Efficiently Extending Transformers for Long Input Summarization: https://arxiv.org/pdf/2208.04347v1.pdf
▪︎ Semantic-Aligned Matching

Recommended AI Papers: July 2022

▪︎ High-Performance GPU-to-CPU Transpilation and Optimization via High-Level Parallel Constructs: https://arxiv.org/pdf/2207.00257.pdf
▪︎ Branchformer: Parallel MLP-Attention Architectures to Capture Local and Global Context for Speech Recognition and Understanding: https://arxiv.org/pdf/2207.02971v1.pdf
▪︎ More ConvNets in the 2020s: Scaling up Kernels Beyond 51 × 51 using Sparsity: https://arxiv.org/pdf/2207.03620v1.pdf
▪︎ Softmax-free Linear Transformers: https://arxiv.org/pdf/2207.03341v1.pdf

Recommended AI Papers: June 2022

▪︎ Scaling Vision Transformers: https://arxiv.org/pdf/2106.04560.pdf
▪︎ Learning Efficient Vision Transformers via Fine-Grained Manifold Distillation: https://arxiv.org/pdf/2107.01378.pdf
▪︎ Risk-averse autonomous systems: A brief history and recent developments from the perspective of optimal control: https://arxiv.org/pdf/2109.08947.pdf
▪︎ LightSeq2: Accelerated Training for Transformer-based Models on GPUs: https://arxiv.org/pdf/2110.05722.pdf
▪︎ Conditionally Elicitable Dynamic Risk Measures For Deep

Recommended AI Papers: May 2022

▪︎ Computational Storytelling And Emotions: A Survey: https://arxiv.org/pdf/2205.10967.pdf
▪︎ Are Large Pre-Trained Language Models Leaking Your Personal Information?: https://arxiv.org/pdf/2205.12628.pdf
▪︎ FreDo: Frequency Domain-based Long-Term Time Series Forecasting: https://arxiv.org/pdf/2205.12301.pdf
▪︎ A Survey on Long-tailed Visual Recognition: https://arxiv.org/pdf/2205.13775.pdf
▪︎ Understanding Factual Errors in Summarization: Errors, Summarizers, Datasets, Error Detectors: https://arxiv.org/pdf/2205.12854.pdf
▪︎ On the

Recommended AI Papers: April 2022

▪︎ Multiview Transformers for Video Recognition: https://arxiv.org/pdf/2201.04288.pdf
▪︎ ViNTER: Image Narrative Generation with Emotion-Arc-Aware Transformer: https://arxiv.org/pdf/2202.07305.pdf
▪︎ Privacy-preserving Anomaly Detection in Cloud Manufacturing via Federated Transformer: https://arxiv.org/pdf/2204.00843.pdf
▪︎ A Tour of Visualization Techniques for Computer Vision Datasets: https://arxiv.org/pdf/2204.08601.pdf
▪︎ Transfer Attacks Revisited: A Large-Scale Empirical Study in Real Computer Vision

Recommended AI Papers: March 2022

▪︎ Near-optimal Offline Reinforcement Learning with Linear Representation: Leveraging Variance Information with Pessimism: https://arxiv.org/pdf/2203.05804.pdf
▪︎ Augmented Reality and Robotics: A Survey and Taxonomy for AR-enhanced Human-Robot Interaction and Robotic Interfaces: https://arxiv.org/pdf/2203.03254.pdf
▪︎ A Fast and Convergent Proximal Algorithm for Regularized Nonconvex and Nonsmooth Bi-level Optimization: https://arxiv.org/pdf/2203.16615.pdf
▪︎ Monte Carlo

Recommended AI papers: Feb 16 – 28, 2022

▪︎ Is Neuro-Symbolic AI Meeting its Promise in Natural Language Processing? A Structured Review: https://arxiv.org/pdf/2202.12205.pdf
▪︎ NeuralFusion: Neural Volumetric Rendering under Human-object Interactions: https://arxiv.org/pdf/2202.12825.pdf
▪︎ Deep Generative model with Hierarchical Latent Factors for Time Series Anomaly Detection: https://arxiv.org/pdf/2202.07586.pdf
▪︎ Deep Recurrent Modelling of Granger Causality with Latent Confounding: https://arxiv.org/pdf/2202.11286.pdf

Recommended AI papers: Feb 1 – 15, 2022

▪︎ LaMDA: Language Models for Dialog Applications: https://arxiv.org/pdf/2201.08239v3.pdf
▪︎ Data-Driven Offline Optimization For Architecting Hardware Accelerators: https://arxiv.org/pdf/2110.11346v3.pdf
▪︎ Don’t Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis: https://arxiv.org/pdf/2202.07728v1.pdf
▪︎ Block-NeRF: Scalable Large Scene Neural View Synthesis: https://arxiv.org/pdf/2202.05263v1.pdf
▪︎ Maintaining fairness across distribution shift: do we have viable

Recommended AI papers: Jan 16 – 31, 2022

▪︎ A Systematic Exploration Of Reservoir Computing For Forecasting Complex Spatiotemporal Dynamics: https://arxiv.org/pdf/2201.08910.pdf
▪︎ FEDformer: Frequency Enhanced Decomposed Transformer for Long-term Series Forecasting: https://arxiv.org/pdf/2201.12740.pdf
▪︎ Quantifying Epistemic Uncertainty in Deep Learning: https://arxiv.org/pdf/2110.12122.pdf
▪︎ What’s Wrong With Deep Learning In Tree Search For Combinatorial Optimization: https://arxiv.org/pdf/2201.10494.pdf
▪︎ A Leap among Quantum

The Cost-Competitiveness of Renewable Energy

Renewable energy has historically been costlier than fossil-fuel-based energy. This is changing now. Lazard’s latest annual report on Levelized Cost of Energy Analysis highlighted that certain renewable energy technologies are becoming cost-competitive vis-à-vis conventional energy technologies. For example, see the chart below. Source: Levelized Cost

The Significance of Lagrange Points

Lagrange Points (or L-points) recently came into the limelight during the launch of the James Webb Space Telescope (JWST). Named after the famous mathematician Joseph-Louis Lagrange, these are special points in space where the net gravitational forces of the earth and the sun (or to

Recommended AI papers: Jan 1 – 15, 2022

▪︎ Beyond Simple Meta-Learning: Multi-Purpose Models for Multi-Domain, Active and Continual Few-Shot Learning: https://arxiv.org/pdf/2201.05151.pdf
▪︎ A unified software/hardware scalable architecture for brain-inspired computing based on self-organizing neural models: https://arxiv.org/pdf/2201.02262v1.pdf
▪︎ MGAE: Masked Autoencoders for Self-Supervised Learning on Graphs: https://arxiv.org/pdf/2201.02534v1.pdf
▪︎ Applications of Signature Methods to Market Anomaly Detection: https://arxiv.org/pdf/2201.02441v1.pdf

Recommended AI papers: Dec 16 – 31, 2021

▪︎ A Globally Convergent Distributed Jacobi Scheme for Block-Structured Non-convex Constrained Optimization Problems: https://arxiv.org/pdf/2112.09027.pdf
▪︎ A Robust Optimization Approach to Deep Learning: https://arxiv.org/pdf/2112.09279.pdf
▪︎ A Simple Single-Scale Vision Transformer for Object Localization and Instance Segmentation: https://arxiv.org/pdf/2112.09747.pdf
▪︎ A Survey of Natural Language Generation: https://arxiv.org/pdf/2112.11739.pdf
▪︎ Are Large-scale Datasets Necessary for

Recommended AI papers: Dec 1 – 15, 2021

▪︎ Simulation Intelligence: Towards A New Generation Of Scientific Methods: https://arxiv.org/pdf/2112.03235v1.pdf
▪︎ Information is Power: Intrinsic Control via Information Capture: https://arxiv.org/pdf/2112.03899v1.pdf
▪︎ GLaM: Efficient Scaling of Language Models with Mixture-of-Experts: https://arxiv.org/pdf/2112.06905v1.pdf
▪︎ Creating Multimodal Interactive Agents with Imitation and Self-Supervised Learning: https://arxiv.org/pdf/2112.03763v1.pdf
▪︎ Efficient Geometry-aware 3D Generative Adversarial Networks: https://arxiv.org/pdf/2112.07945v1.pdf

Recommended AI papers: Nov 16 – 30, 2021

▪︎ Covariate Shift in High-Dimensional Random Feature Regression: https://arxiv.org/pdf/2111.08234v1.pdf
▪︎ Improving Transferability of Representations via Augmentation-Aware Self-Supervision: https://arxiv.org/pdf/2111.09613v1.pdf
▪︎ GFlowNet Foundations: https://arxiv.org/pdf/2111.09266v1.pdf
▪︎ Stochastic Variance Reduced Ensemble Adversarial Attack for Boosting the Adversarial Transferability: https://arxiv.org/pdf/2111.10752v1.pdf
▪︎ Benchmarking Detection Transfer Learning with Vision Transformers: https://arxiv.org/pdf/2111.11429v1.pdf
▪︎ Improved Knowledge Distillation via Adversarial Collaboration:

Recommended AI papers: Nov 1 – 15, 2021

▪︎ Gradients are Not All You Need: https://arxiv.org/pdf/2111.05803v1.pdf
▪︎ RAVE: A variational autoencoder for fast and high-quality neural audio synthesis: https://arxiv.org/pdf/2111.05011v1.pdf
▪︎ NLP From Scratch Without Large-Scale Pretraining: A Simple and Efficient Framework: https://arxiv.org/pdf/2111.04130v1.pdf
▪︎ A Shading-Guided Generative Implicit Model for Shape-Accurate 3D-Aware Image Synthesis: https://arxiv.org/pdf/2110.15678v2.pdf
▪︎ On Representation Knowledge

Recommended AI papers: Oct 16 – 31, 2021

▪︎ Shaking the foundations: delusions in sequence models for interaction and control: https://arxiv.org/pdf/2110.10819v1.pdf
▪︎ Understanding Dimensional Collapse in Contrastive Self-supervised Learning: https://arxiv.org/pdf/2110.09348v1.pdf
▪︎ SOFT: Softmax-free Transformer with Linear Complexity: https://arxiv.org/pdf/2110.11945v2.pdf
▪︎ Understanding How Encoder-Decoder Architectures Attend: https://arxiv.org/pdf/2110.15253v1.pdf
▪︎ Parameter Prediction for Unseen Deep Architectures: https://arxiv.org/pdf/2110.13100v1.pdf
▪︎ From Machine Learning to Robotics: Challenges

Recommended AI papers: Oct 1 – 15, 2021

▪︎ Multitask prompted training enables zero-shot task generalization: https://arxiv.org/pdf/2110.08207v1.pdf
▪︎ DETR3D: 3D Object Detection from Multi-view Images via 3D-to-2D Queries: https://arxiv.org/pdf/2110.06922v1.pdf
▪︎ Object DGCNN: 3D Object Detection using Dynamic Graphs: https://arxiv.org/pdf/2110.06923v1.pdf
▪︎ Symbolic Knowledge Distillation: from General Language Models to Common sense Models: https://arxiv.org/pdf/2110.07178v1.pdf
▪︎ Graph Neural Networks with Learnable

Architecting AI Applications

It is common knowledge that efficient architecture design is a key aspect of building any product/solution, including AI applications. However, in reality, it is often observed that companies pay limited attention to developing robust end-to-end architectures before initiating AI application development. This leads to several

Can Traversable Wormholes Be Real?

It has been over ninety years since the fundamental concept of Schwarzschild Wormholes (also called Einstein–Rosen Bridges) was first proposed. Over the years, scientists have been trying to determine if wormholes are actually a physical reality, and if they do exist, whether they are traversable

A New Legal Framework for AI

The European Union has just released its first legal framework for Artificial Intelligence. It covers a wide range of areas, including:
▪︎ Defining the fundamental notion of an AI system.
▪︎ Laying down the requirements for high risk AI systems, and obligations of their operators.
▪︎ Prohibiting certain

China’s Super-Scale Intelligence Model System

The Beijing Academy of Artificial Intelligence (BAAI) released China’s first super-scale intelligence model system: WuDao 1.0. This is a combination of four very large-scale NLP models. WenYuan: China’s largest pre-training language model (supporting Chinese & English) for text categorization, sentiment analysis, reading comprehension, etc. It

The Current State of AutoML

Automated Machine Learning has come a long way since Google Brain introduced NAS in early 2017. Amazon’s AutoGluon, Google’s AutoMLZero, Salesforce’s TransmogrifAI, the AutoML features of Azure, H2O, Scikit-learn, Keras & others (TPOT, DataRobot, etc.) are witnessing increased adoption. Modern AutoML systems generally focus on hyper-parameter optimization (HPO), neural architecture search (NAS), model
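
As a tiny illustration of the HPO piece, the sketch below uses scikit-learn's RandomizedSearchCV (one of the toolkits mentioned above) on a synthetic dataset; the model, search space, and budget are arbitrary illustrative choices, not a recommended recipe.

```python
# A minimal hyper-parameter optimization (HPO) sketch: random search over a
# small, illustrative search space with cross-validated scoring per trial.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

search = RandomizedSearchCV(
    estimator=RandomForestClassifier(random_state=0),
    param_distributions={                 # the hyper-parameter search space
        "n_estimators": [50, 100, 200],
        "max_depth": [None, 4, 8, 16],
        "min_samples_leaf": [1, 2, 5],
    },
    n_iter=10,                            # sampling budget (number of trials)
    cv=3,                                 # 3-fold cross-validation per trial
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```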

The GEM Benchmark for Natural Language Generation

Earlier this year, 55 researchers from 44 global institutions proposed GEM (Generation, Evaluation & Metrics), a new benchmark environment for Natural Language Generation. It evaluates models through an interactive result exploration system. This enables a much better understanding of model limitations & improvement opportunities, and

Design Patterns for building AI & ML Applications

Design Patterns are reusable, formalized constructs that serve as templates to address common problems in designing efficient systems. These enable the development of high-performance, resilient & robust applications. Widely-used design patterns, especially from the object-oriented paradigm, include:
▪︎ Behavioral Patterns: Command, Mediator, Memento, Observer, Visitor, etc.
▪︎ Creational Patterns:
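
As a small illustration, here is a minimal Python sketch of one of the behavioral patterns listed above, the Observer: subscribers register callbacks with a subject and are notified whenever it emits an event. The class and event names are illustrative.

```python
# Observer pattern sketch: a subject keeps a list of callbacks and notifies
# them of events without knowing anything else about the subscribers.
class Subject:
    def __init__(self):
        self._observers = []

    def subscribe(self, callback):
        self._observers.append(callback)

    def notify(self, event):
        for callback in self._observers:
            callback(event)

# Usage: two independent observers react to the same event.
alerts = Subject()
alerts.subscribe(lambda e: print("logger saw:", e))
alerts.subscribe(lambda e: print("dashboard saw:", e))
alerts.notify("model-drift-detected")
```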

The State of Play in Emotion AI

Emotion AI, also known as Affective Computing or Artificial Emotional Intelligence, is an interdisciplinary field that operates at the intersection of Behavioral Science, Cognitive Computing, Computer Science, Machine Learning, Neuroscience, Psychology, Signal Processing, and others. This is one of the rapidly evolving areas of AI

Navigating R&D Organizations Through Economic Turbulence – Four Principal Strategies For A Strong (Post-Crisis) Emergence

Extraordinary threats create extraordinary opportunities. Research & Development programs offer a great mechanism to explore/exploit such opportunities, thus enabling significant long-term value creation. At the same time, R&D functions are usually among the first to get financially impacted when a crisis strikes. Navigating R&D organizations

Knowledge Graphs in AI Development

A common grievance of most enterprises is that while data is abundant, there is not enough knowledge. Data is the symbolic representation of the observable properties of real-world entities and, on its own, yields limited practical value. Knowledge, on the other hand, is ‘meaningful data’