Part of
Representation Learning Lab

Computer Engineering, University of Kurdistan, Iran

About Us

The AML team is part of the Representation Learning Lab, affiliated with the Department of Computer Engineering at the University of Kurdistan, Iran.

Algebraic Machine Learning is a machine learning approach that uses the mathematics of model theory to embed what we know about the data, together with formal knowledge, directly into a discrete algebraic structure.

Our algorithms build upon linear algebraic, geometric, probabilistic, and deep learning operations. We also work on the theoretical foundations and principles underlying our algorithmic approaches.


Our current algebraic machine learning projects are mainly focused on the following directions:

  • To learn data representations that facilitate data understanding through visual observation.

  • To understand and mitigate the trade-off between model robustness and accuracy by both theoretical and empirical studies.

  • To seek effective solutions for large-scale, noisy data with high redundancy, as well as for missing data, small-sample settings, and inadequately labeled data.

  • To leverage multimodal data collected from different information resources and characterized by different feature views.

AML Team

Dr. Fardin Akhlaghian

Fardin is an associate professor of Computer Engineering at the University of Kurdistan. His research focuses on machine learning, computer vision, and data mining. He received his PhD in Computer Vision from the University of Wollongong in 2005 and holds a master's degree in Telecommunications and Signal Processing from Tarbiat Modarres University (1992).

Associate Professor
Supervisor
Amjad Seyedi

Amjad is a doctoral researcher at the University of Mons, working on matrix theory and optimization. He received his master's in Artificial Intelligence from the Department of Computer Engineering at the University of Kurdistan. His work mainly focuses on matrix factorization, low-rank approximation, and representation learning.

Advisor
Former Team Leader

PhD Students

Wafa Barkhoda
Sep 2022 - present

Distributionally Robust Learning

Wafa is a faculty member of the Computer Engineering Department at the Kermanshah University of Technology.

Reza Mahmoodi
Sep 2022 - present

Multi-aspect Learning

Reza completed his master's degree in the AML team.


Master's Students

Sarina Khaledian
2022 - present

Multi-Objective Recommendation Systems

Shirin Moradi
2022 - present

Anomaly Detection

Zahra Mohseni Rad
2023 - present

Hyperspectral Unmixing

Negin Jabbari
2023 - present

Multi-View Representation


Alumni (Master's)

Navid Salahian
2020 - 2022

Deep Self-Representation Learning

Jovan Chavoshinejad
2020 - 2022

Self-supervised Semi-supervised Learning

Maryam Mozafari
2021 - 2022

Unsupervised Feature Selection
(PhD student at the UQO, Canada)

Reza Mahmoodi
2021 - 2022

Link Prediction by Adversarial Training
(PhD student at the UOK, Iran)

Akram Hajiveiseh
2020 - 2023

Directed Graph Clustering

Setareh Mohammadi
2021 - 2023

Robust Data Representation

Mohammad Faraji
2021 - 2023

Multi-label feature selection

Arina Mohammadi
2022 - 2023

Attributed Graph Clustering
(Co-Supervisor: Dr. Pir Mohammadiani)

Sayvan SoleymanBeigi
2022 - 2024

Text Clustering/Topic Modeling
(Co-Supervisor: Dr. Daneshfar)







Research



We are looking for
Postdocs and research assistants with expertise in machine learning

Publications

Diverse Joint Nonnegative Matrix Tri-Factorization for Attributed Graph Clustering

This paper proposes the Diverse Joint Nonnegative Matrix Tri-Factorization (Div-JNMTF), an embedding-based model for detecting communities in attributed graphs. The novel JNMTF model extracts two distinct node representations from topological and non-topological data. Simultaneously, a diversity regularization technique based on the Hilbert-Schmidt Independence Criterion (HSIC) is employed to reduce redundant information in the node representations while encouraging distinct contributions from both types of information.

Applied Soft Computing, 2024
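
As a rough illustration of the HSIC-based diversity idea above, here is a minimal numpy sketch of the empirical HSIC between two node representations using linear kernels; it is a simplified stand-in, not the paper's implementation or released code.

```python
import numpy as np

def hsic_penalty(Z1, Z2):
    """Empirical HSIC between two node representations (n x k1 and n x k2).
    A smaller value means the two views carry less redundant information."""
    n = Z1.shape[0]
    K = Z1 @ Z1.T                          # linear kernel on the first view
    L = Z2 @ Z2.T                          # linear kernel on the second view
    H = np.eye(n) - np.ones((n, n)) / n    # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

# toy usage: two random representations of 50 nodes
rng = np.random.default_rng(0)
Z_topo, Z_attr = rng.random((50, 8)), rng.random((50, 8))
print(hsic_penalty(Z_topo, Z_attr))
```

Penalizing this quantity during factorization pushes the topological and attribute representations toward complementary, rather than overlapping, information.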


Enhancing Link Prediction through Adversarial Training in Deep Nonnegative Matrix Factorization

This paper proposes Link Prediction using Adversarial Deep NMF (LPADNMF), a novel model that enhances the generalization of network reconstruction in sparse graphs. The main contribution is an adversarial training scheme that incorporates a bounded attack on the input, leveraging the $\ell_{2,1}$ norm to generate diverse perturbations. This adversarial training aims to improve the model's robustness and prevent overfitting, particularly in scenarios with limited training data.

Engineering Applications of Artificial Intelligence, 2024
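
To make the bounded-attack idea concrete, the sketch below generates a single steepest-ascent perturbation of the input matrix and rescales it to a fixed $\ell_{2,1}$ budget. It is a simplified illustration under our own assumptions (one ascent step, column-wise $\ell_{2,1}$ norm), not the paper's training procedure.

```python
import numpy as np

def l21_norm(R):
    """l2,1 norm taken as the sum of column-wise Euclidean norms."""
    return np.linalg.norm(R, axis=0).sum()

def bounded_perturbation(A, W, H, eps=0.5):
    """One steepest-ascent step on the reconstruction loss ||(A + R) - WH||_F^2,
    rescaled so the perturbation R has l2,1 norm exactly eps."""
    G = A - W @ H                      # ascent direction at R = 0
    norm = l21_norm(G)
    return eps * G / norm if norm > 0 else np.zeros_like(G)

# toy usage on a random nonnegative matrix and factors
rng = np.random.default_rng(0)
A = rng.random((30, 30))
W, H = rng.random((30, 5)), rng.random((5, 30))
R = bounded_perturbation(A, W, H)
print(round(l21_norm(R), 6))           # stays within the eps budget
```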


Unsupervised Feature Selection using Orthogonal Encoder-Decoder Factorization

This paper proposes the Orthogonal Encoder-Decoder factorization for unsupervised Feature Selection (OEDFS) model, which combines the strengths of self-representation and pseudo-supervised approaches. The method draws inspiration from the self-representation properties of autoencoder architectures and leverages encoder and decoder factorizations to simulate a pseudo-supervised feature selection approach. To further enhance the part-based characteristics of the factorization, we incorporate orthogonality constraints and local structure preservation terms into the objective function.

Information Sciences, 2024
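
The sketch below shows the general encoder-decoder factorization pattern for unsupervised feature selection: factor X ≈ XWH and rank features by the row norms of the encoder W. It is a bare-bones illustration with plain multiplicative updates; the orthogonality and local-structure terms of OEDFS, and its actual update rules, are omitted.

```python
import numpy as np

def feature_scores(X, k=5, iters=200):
    """Toy encoder-decoder factorization X ~= X W H with W, H >= 0.
    Features are ranked by the row norms of the encoder W."""
    n, d = X.shape
    rng = np.random.default_rng(0)
    W, H = rng.random((d, k)), rng.random((k, d))
    XtX = X.T @ X
    for _ in range(iters):
        # multiplicative updates for min ||X - X W H||_F^2
        W *= (XtX @ H.T) / (XtX @ W @ H @ H.T + 1e-9)
        Z = X @ W
        H *= (Z.T @ X) / (Z.T @ Z @ H + 1e-9)
    return np.linalg.norm(W, axis=1)        # one score per feature

X = np.abs(np.random.default_rng(1).normal(size=(100, 20)))
print(np.argsort(feature_scores(X))[::-1][:5])   # top-5 feature indices
```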


Multi-Label Feature Selection with Global and Local Label Correlation

This paper proposes a feature selection model which exploits explicit global and local label correlations to select discriminative features across multiple labels. In addition, by representing the feature matrix and label matrix in a shared latent space, the model aims to capture the underlying correlations between features and labels. The shared representation can reveal common patterns or relationships that exist across multiple labels and features. An objective function involving L2,1-norm regularization is formulated, and an alternating optimization-based iterative algorithm is designed to obtain the sparse coefficients for multi-label feature selection.

Expert Systems with Applications, 2024
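
As a rough sketch of how such objectives are typically structured (the exact formulation and its label-correlation terms are given in the paper), a shared-latent-space multi-label feature selection problem can be written as

$$\min_{W,\,V,\,B}\;\; \|XW - V\|_F^2 \;+\; \alpha\,\|VB - Y\|_F^2 \;+\; \beta\, R_{\mathrm{corr}}(V, B; Y) \;+\; \gamma\,\|W\|_{2,1},$$

where $V$ is the shared latent representation linking the feature and label views, $R_{\mathrm{corr}}$ stands for the global and local label-correlation regularizers, and the $\ell_{2,1}$ penalty makes the rows of $W$ sparse so that features can be ranked by $\|W_{i\cdot}\|_2$.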


Deep Asymmetric Nonnegative Matrix Factorization for Graph Clustering

This paper proposes a graph-specific Deep NMF model based on the Asymmetric NMF which can handle undirected and directed graphs. Inspired by hierarchical graph clustering and graph summarization approaches, the Deep Asymmetric Nonnegative Matrix Factorization (DAsNMF) is introduced for the directed graph clustering problem. In a pseudo-hierarchical clustering setting, DAsNMF decomposes the input graph to extract low-level to high-level node representations and graph representations (summarized graphs).

Pattern Recognition, 2024
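
For intuition, the sketch below runs a single shallow asymmetric factorization A ≈ U Vᵀ of a directed adjacency matrix, where U and V play outgoing- and incoming-link roles; DAsNMF stacks such factorizations into a deep, pseudo-hierarchical model, which this simplified example does not do.

```python
import numpy as np

def asymmetric_nmf(A, k=4, iters=500, seed=0):
    """Shallow asymmetric NMF A ~= U V^T for a directed adjacency matrix,
    with distinct factors U (outgoing roles) and V (incoming roles)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    U, V = rng.random((n, k)), rng.random((n, k))
    for _ in range(iters):
        # multiplicative updates for min ||A - U V^T||_F^2 with U, V >= 0
        U *= (A @ V) / (U @ (V.T @ V) + 1e-9)
        V *= (A.T @ U) / (V @ (U.T @ U) + 1e-9)
    return U, V

# toy usage: cluster a random directed graph from the combined roles
A = (np.random.default_rng(1).random((40, 40)) < 0.1).astype(float)
U, V = asymmetric_nmf(A)
labels = np.argmax(U + V, axis=1)
print(labels[:10])
```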


Link Prediction by Adversarial Nonnegative Matrix Factorization

This paper proposes a novel link prediction method based on adversarial NMF, which reconstructs a sparse network via an efficient adversarial training algorithm. Unlike conventional NMF methods, our model considers potential test adversaries beyond the pre-defined bounds and provides a robust reconstruction with good generalization power. Moreover, to preserve the local structure of a network, we use the common-neighbor algorithm to extract node similarity and apply it to the low-dimensional latent representation.

Knowledge-Based Systems, 2023
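
The local-structure term relies on a common-neighbor similarity, which is straightforward to compute; the sketch below is just that similarity on a toy undirected graph, not the full adversarial NMF pipeline.

```python
import numpy as np

def common_neighbor_similarity(A):
    """S[i, j] counts the neighbors shared by nodes i and j (diagonal zeroed)."""
    A = (A > 0).astype(float)
    S = A @ A.T
    np.fill_diagonal(S, 0)
    return S

# toy usage on a 4-node undirected graph
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
print(common_neighbor_similarity(A))
```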


Self-Supervised Semi-Supervised Nonnegative Matrix Factorization for Data Clustering

In this paper, we design an effective Self-Supervised Semi-Supervised Nonnegative Matrix Factorization (S4NMF) in a semi-supervised clustering setting. The S4NMF directly extracts a consensus result from ensembled NMFs with similarity and dissimilarity regularizations. In an iterative process, this self-supervisory information will be fed back to the proposed model to boost semi-supervised learning and form more distinct clusters.

Pattern Recognition, 2023


Elastic Adversarial Deep Nonnegative Matrix Factorization for Matrix Completion

This paper proposes an elastic adversarial training scheme to design a high-capacity Deep Nonnegative Matrix Factorization (DNMF) model that properly discovers the latent structure of the data and has enhanced generalization abilities. In other words, we address the aforementioned challenges by perturbing the inputs in DNMF with an elastic loss that adaptively interpolates between the Frobenius and L2,1 norms. This model not only dispenses with adversarial DNMF generation but is also robust to a mixture of multiple attacks, attaining improved accuracy.

Information Sciences, 2023
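
One common way to interpolate between the two norms, given here only as an illustration of the idea and not as the paper's exact loss, is a convex combination of the Frobenius and $\ell_{2,1}$ reconstruction errors:

$$\mathcal{L}_{\mathrm{elastic}}(E) \;=\; \alpha\,\|E\|_F^{2} \;+\; (1-\alpha)\,\|E\|_{2,1}, \qquad 0 \le \alpha \le 1,$$

where $E$ is the reconstruction error of the deep factorization; $\alpha = 1$ recovers the Frobenius loss and $\alpha = 0$ the $\ell_{2,1}$ loss, with intermediate values trading smoothness against robustness to structured perturbations.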


Deep Autoencoder-like NMF with Contrastive Regularization and Feature Relationship Preservation

This paper proposes the Deep Autoencoder-like NMF with Contrastive Regularization and Feature Relationship preservation (DANMF-CRFR) to address data representation challenges. Inspired by contrastive learning, this deep model learns discriminative and instructive deep features while adequately enforcing the local and global structures of the data on its decoder and encoder components. Meanwhile, DANMF-CRFR also imposes feature correlations on the basis matrices during feature learning to improve its part-based learning capabilities.

Expert Systems with Applications, 2023


Self-Paced Multi-Label Learning with Diversity

In this paper, we propose self-paced multi-label learning with diversity (SPMLD), which aims to cover diverse labels as the learning pace progresses. In addition, the proposed framework is applied to an efficient correlation-based multi-label method. The non-convex objective function is optimized by an extension of the block coordinate descent algorithm. Empirical evaluations on real-world datasets with different dimensions of features and labels demonstrate the effectiveness of the proposed predictive model.

Asian Conference on Machine Learning, 2019
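
For readers unfamiliar with self-paced learning, the sketch below shows the classic hard weighting scheme on which such models build: a sample is included only when its current loss falls below the age parameter lam, which grows over iterations. The diversity term over label groups that SPMLD adds is omitted here.

```python
import numpy as np

def self_paced_weights(losses, lam):
    """Hard self-paced weights: 1 if the sample's loss is below lam, else 0."""
    return (losses < lam).astype(float)

losses = np.array([0.2, 1.5, 0.7, 3.0])
for lam in (0.5, 1.0, 2.0):           # as lam grows, harder samples join in
    print(lam, self_paced_weights(losses, lam))
```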


A Weakly-Supervised Factorization Method with Dynamic Graph Embedding

In this paper, a dynamic weakly supervised factorization is proposed to learn a classifier from partially supervised data within the NMF framework. A label propagation mechanism is used to initialize the label matrix factor of NMF, and a graph-based method dynamically updates the partially labeled data in each iteration. This mechanism enriches the supervised information at every iteration and consequently improves the classification performance.

Artificial Intelligence and Signal Processing Conference (AISP), 2017
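
The label-propagation initialization can be pictured with the standard iterative scheme below, which spreads the few observed labels over a similarity graph before the factorization starts; this is a generic textbook version, not the exact mechanism used in the paper.

```python
import numpy as np

def propagate_labels(W, Y, alpha=0.9, iters=50):
    """Spread partial labels Y (n x c) over a similarity graph W (n x n),
    keeping the observed labels anchored at each iteration."""
    S = W / (W.sum(axis=1, keepdims=True) + 1e-9)   # row-normalized transitions
    F = Y.astype(float).copy()
    for _ in range(iters):
        F = alpha * (S @ F) + (1 - alpha) * Y
    return F

# toy usage: 4 nodes, 2 classes, only nodes 0 and 3 labeled
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Y = np.array([[1, 0], [0, 0], [0, 0], [0, 1]], dtype=float)
print(np.round(propagate_labels(W, Y), 2))
```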

Contact Us

Contact us and we'll get back to you within 24 hours.

    Algebraic Machine Learning (222), Computer Engineering Department, University of Kurdistan, Sanandaj, Iran

    amjadseyedi@uok.ac.ir