Research Article 
								Coherent Filters of Pseudocomplemented 1-Distributive Lattices
								
									
										
											
											
Chandrani Nag, Syed Md Omar Faruk*
											
										
									
								 
								
									
Issue: Volume 11, Issue 3, September 2025
Pages: 60-65
									
								 
								
Received: 25 August 2025
Accepted: 3 September 2025
Published: 25 September 2025
									
								 
								
								
								
									
									
										Abstract: This work explores coherent filters in the framework of pseudocomplemented 1-distributive lattices. After reviewing the basic properties of such lattices and their pseudocomplements, we introduce the notion of coherent filters and establish conditions under which a filter is coherent. The study further examines the relationships between coherent, strongly coherent, and τ-closed filters, showing how these concepts interact with classical structures such as p-filters and D-filters. Several equivalent characterizations are derived, linking coherence with closure, pseudocomplements, and annihilators. In addition, we investigate semi Stone and Stone lattices, proving that a pseudocomplemented 1-distributive lattice is semi Stone precisely when every τ-closed filter is strongly coherent. This provides a new structural perspective on the role of coherence in lattice theory. By generalizing results previously known in distributive lattices, the paper offers a unified approach to understanding filter behavior in broader algebraic settings, with potential implications for further developments in lattice theory and related algebraic systems.
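For readers coming from outside lattice theory, the standard background notions behind this abstract can be recalled in the usual notation (a minimal reminder only; the paper's own definitions of coherent, strongly coherent, and τ-closed filters and of semi Stone lattices are not restated in this listing):

\begin{align*}
\text{1-distributive:}\quad & a \vee b = 1 \ \text{and}\ a \vee c = 1 \ \Longrightarrow\ a \vee (b \wedge c) = 1,\\
\text{pseudocomplement } a^{*}:\quad & a \wedge x = 0 \iff x \le a^{*} \quad (a^{*}\ \text{is the largest element meeting}\ a\ \text{at}\ 0),\\
\text{filter } F:\quad & F \neq \emptyset,\quad x, y \in F \Rightarrow x \wedge y \in F,\quad x \in F,\ x \le y \Rightarrow y \in F,\\
\text{Stone identity:}\quad & a^{*} \vee a^{**} = 1 \quad \text{for all } a.
\end{align*}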
									
								
								
							
							
								Research Article 
								Use of Reinforcement Learning to Gain the Nash Equilibrium
								
									
										
											
											
												Reza Habibi* 
											
										
									
								 
								
									
Issue: Volume 11, Issue 3, September 2025
Pages: 66-70
									
								 
								
Received: 29 August 2025
Accepted: 13 October 2025
Published: 31 October 2025
									
								 
								
									
										
											
DOI: 10.11648/j.ml.20251103.12
											
										
										
									
								 
								
								
									
									
Abstract: Reinforcement learning (RL) is a type of machine learning in which an agent learns optimal behavior through interaction with its environment; it trains software, by trial and error, to take a sequence of desired actions. A strong Nash equilibrium (SNE) is a combination of actions of the different players from which no coalition of players can profitably deviate by acting together; in an ordinary Nash equilibrium, no single player can improve their payoff by deviating unilaterally. Each player chooses the best strategy among all options, and equilibrium arises when each player knows the strategies of the opponents and responds optimally to that knowledge. In non-cooperative games, a Nash equilibrium occurs when the players hold mutually optimal strategies, so that no matter how a player changes their own strategy, their payoff cannot improve. This paper explores the application of reinforcement learning algorithms within the domain of game theory, with a particular focus on their convergence properties toward Nash equilibrium. We analyze the Q-learning approach in two-agent environments, highlighting its capacity to learn optimal strategies through iterative interactions. Our theoretical investigation examines the conditions under which these algorithms converge to Nash equilibrium, considering factors such as learning rate schedules. The insights gained contribute to a deeper understanding of how reinforcement learning can serve as a powerful tool for equilibrium computation in complex strategic environments, paving the way for advanced applications in economics, automated negotiations, and autonomous systems.
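The abstract stops short of algorithmic detail, so the following is a purely illustrative sketch of independent Q-learning in a two-agent repeated matrix game; the game, payoffs, exploration rate, and learning-rate schedule are hypothetical choices for illustration, not taken from the paper.

import numpy as np

# Hypothetical 2x2 stage game (not from the paper): payoff[i][a0][a1] is
# player i's reward when player 0 plays a0 and player 1 plays a1.
# This coordination-style game has two pure Nash equilibria, (0, 0) and (1, 1).
payoff = np.array([
    [[2, 0], [0, 1]],   # player 0's payoffs
    [[2, 0], [0, 1]],   # player 1's payoffs
])

rng = np.random.default_rng(0)
n_actions = 2
Q = np.zeros((2, n_actions))   # one stateless Q-table per player
eps = 0.2                      # exploration probability
T = 20000

for t in range(1, T + 1):
    alpha = 1.0 / t ** 0.6     # decaying learning-rate schedule
    # epsilon-greedy action selection for each independent learner
    acts = [
        rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[i]))
        for i in range(2)
    ]
    for i in range(2):
        r = payoff[i, acts[0], acts[1]]
        # stateless Q-learning update: move Q toward the observed reward
        Q[i, acts[i]] += alpha * (r - Q[i, acts[i]])

print("Learned Q-values:", Q)
print("Greedy joint action:", [int(np.argmax(Q[i])) for i in range(2)])

With a step size that decays like 1/t^0.6 (the kind of learning-rate schedule the abstract alludes to), the two independent learners' greedy joint action typically settles on the payoff-dominant equilibrium (0, 0) of this toy game; the paper's actual convergence analysis is more general than this sketch.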