Wednesday, 31 August 2016

Numerical Analysis of the Heat Transfer in a Heterojunction Device for Optoelectronic Applications

Physical and numerical descriptions of the heat transfer phenomenon inside a multilayer thin-film nanomaterial are determined. The mathematical model of a tin dioxide thin-film multilayer deposited on a composite silicon dioxide/silicon substrate is studied and solved by two numerical techniques, taking into account the variability of the thermal conductivity.

Numerical Analysis of the Heat Transfer
The two main interests in this study are the determination of the maximum temperature applied to the multilayer nanomaterial, and the analysis of the effect of the porous medium that exists between certain layers on the heat transfer. In addition, in order to determine the physical parameters of our system, the influence of the thickness of the deposited thin film is studied, and the numerical model that estimates these values in the heterojunction device is analysed.

With the continued reduction in the dimensions of technological devices, the heat produced can become significant and component failures can occur. According to NASA, 90% of failures are due to defects and thermal interconnects; according to the US Air Force, 55% of electronic failures are due to thermal effects.
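The variable-conductivity conduction problem described above can be sketched with a simple explicit finite-difference scheme in one dimension. The layer split, material constants and boundary temperatures below are placeholders for illustration, not the paper's data.

```python
import numpy as np

# Illustrative 1D explicit finite-difference solver for heat conduction
# across a two-layer stack (hypothetical film-on-substrate geometry);
# all material values here are placeholders, not the paper's parameters.
def solve_heat_1d(k, rho_c, dx, dt, T0, t_end, T_left, T_right):
    """March the variable-conductivity heat equation forward in time."""
    T = T0.copy()
    n_steps = int(t_end / dt)
    # conductivity at cell interfaces (harmonic mean handles layer jumps)
    k_face = 2.0 * k[:-1] * k[1:] / (k[:-1] + k[1:])
    for _ in range(n_steps):
        flux = k_face * (T[1:] - T[:-1]) / dx           # interface heat fluxes
        T[1:-1] += dt / (rho_c[1:-1] * dx) * (flux[1:] - flux[:-1])
        T[0], T[-1] = T_left, T_right                   # fixed-temperature ends
    return T

# two layers with different thermal conductivity
n = 50
k = np.where(np.arange(n) < 25, 40.0, 1.4)   # W/(m K), placeholder values
rho_c = np.full(n, 2.0e6)                    # J/(m^3 K), placeholder
dx = 1e-8                                    # 10 nm cells
dt = 1e-12                                   # step chosen below the stability limit
T = solve_heat_1d(k, rho_c, dx, dt, np.full(n, 300.0), 1e-9, 400.0, 300.0)
```

The harmonic mean at the interfaces is the standard way to keep the heat flux continuous across a jump in conductivity between layers.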

Tuesday, 30 August 2016

Comparison between Robust and Classical Analysis in Bivariate Logistic Regression for Medical Data

Medical and biological data represent an important part of experiments concerned with human life. The primary objective of this research is to use statistical optimization methods to analyse the data and identify the important factors affecting the study variables (liver fat, liver size). Because the variables are interconnected, a statistical method is needed to examine the degree of their relationship; we used bivariate logistic regression.

Bivariate logistic
To achieve the aim of the research, a field study was conducted in Al-Sadr Medical City in the province of Najaf by taking a sample of 150 patients attending the diabetes and liver disease center. From the statistical analysis results we observed that the diagnostic fit of the model is good in both methods, and we also identified the factors affecting the responses (liver fat, liver size); we recommend multivariate logistic analysis for future work.

This study aims to review the bivariate logistic distribution method in order to study and analyse the factors affecting the response variables (degree of liver fat and increase of liver size in human beings), using medical test data to compare classical and robust analysis when some sample values are outliers.

Monday, 29 August 2016

Time Scan Statistics for High-risk Clusters of Tuberculosis (TB) Disease

Tuberculosis (TB) is a disease caused by bacteria that are spread through the air from person to person. If not treated properly, TB disease can be fatal. People infected with TB bacteria who are not sick may still need treatment to prevent TB disease from developing in the future. 

Time Scan Statistics
Tuberculosis (TB) is currently one of the greatest problems in public health. Mycobacterium tuberculosis infects about one third of the world's population, of whom more than 80% are living in developing countries. The incidence and prevalence of TB are very different in various parts of Iran and also throughout the world. Learn to recognize the symptoms of TB disease and find out if you are at risk.
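As an illustration of how a purely temporal scan statistic flags a high-risk cluster, here is a minimal Poisson-model sketch in the spirit of Kulldorff's scan statistic; the monthly case counts and the window-length limit are invented toy values.

```python
# Minimal purely temporal scan statistic under a Poisson model:
# slide windows over the series and score each by a log-likelihood ratio.
import math

def scan_llr(c, e, total_c, total_e):
    """Log-likelihood ratio for a window with c cases and e expected."""
    if c <= e:                      # only elevated-risk windows are of interest
        return 0.0
    out_c, out_e = total_c - c, total_e - e
    llr = c * math.log(c / e)
    if out_c > 0:
        llr += out_c * math.log(out_c / out_e)
    return llr

def best_window(cases, max_len):
    """Return (llr, start, length) of the most unusual time window."""
    n, total = len(cases), sum(cases)
    best = (0.0, 0, 0)
    for start in range(n):
        for length in range(1, min(max_len, n - start) + 1):
            c = sum(cases[start:start + length])
            e = total * length / n          # expected under uniform risk
            llr = scan_llr(c, e, total, total)
            if llr > best[0]:
                best = (llr, start, length)
    return best

monthly_tb_cases = [3, 2, 4, 3, 2, 9, 11, 8, 3, 2, 3, 4]   # toy data
llr, start, length = best_window(monthly_tb_cases, max_len=4)
```

In practice the significance of the top window is assessed by Monte Carlo replication of the series under the null model, which is omitted here for brevity.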

Friday, 26 August 2016

Algebra, Hyperalgebra and Lie-Santilli Theory

The theory of hyperstructures can offer to the Lie-Santilli theory a variety of models to specify the mathematical representation of the related theory. In this paper we focus on the appropriate general hyperstructures, especially on hyperstructures with hyperunits. We define a Lie hyperalgebra over a hyperfield as well as a Jordan hyperalgebra, and we obtain some results in this respect. Finally, by using the concept of fundamental relations we connect hyperalgebras to Lie algebras and Lie-Santilli admissible algebras.

Hyperalgebra
The structure of the laws in physics is largely based on symmetries. The objects in Lie theory are fundamental, interesting and innovating in both mathematics and physics. It has many applications to the spectroscopy of molecules, atoms, nuclei and hadrons. The central role of Lie algebra in particle physics is well known. A Lie-admissible algebra, introduced by Albert, is a (possibly non-associative) algebra that becomes a Lie algebra under the bracket [a,b] = ab − ba. Examples include associative algebras, Lie algebras and Okubo algebras. Lie-admissible algebras arise in various topics, including geometry of invariant affine connections on Lie groups and classical and quantum mechanics.
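The statement that an associative algebra is Lie-admissible can be sanity-checked numerically: under the commutator bracket [a, b] = ab − ba, matrix algebras must satisfy antisymmetry and the Jacobi identity. This small check is my own illustration, not part of the paper.

```python
# Numerical check that an associative matrix algebra becomes a Lie
# algebra under the commutator bracket [a, b] = ab - ba: the Jacobi
# identity must hold up to floating-point rounding.
import numpy as np

rng = np.random.default_rng(0)
a, b, c = (rng.standard_normal((4, 4)) for _ in range(3))

def bracket(x, y):
    return x @ y - y @ x

# Jacobi identity: [a,[b,c]] + [b,[c,a]] + [c,[a,b]] = 0
jacobi = (bracket(a, bracket(b, c))
          + bracket(b, bracket(c, a))
          + bracket(c, bracket(a, b)))
print(np.allclose(jacobi, 0))
```

The identity follows from associativity alone, which is exactly why every associative algebra is Lie-admissible.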

Wednesday, 24 August 2016

On Duality of Multiobjective Rough Convex Programming Problems

Duality assertions are very important in optimization from the theoretical as well as the numerical point of view. This paper presents duality of multiobjective rough convex programming problems in a rough environment when the multiobjective function is deterministic and the roughness is in the feasible region.

Rough set
It also discusses duality when the roughness is in the multiobjective function and the feasible region is deterministic. The concepts and some theorems of duality in the rough environment are discussed, and the procedure for solving these kinds of problems is described.

Mathematical programming in a rough environment was introduced earlier. The multiobjective rough convex programming problem (MRCPP) can be classified according to whether the roughness lies in the multiobjective function or in the constraints. The problems can be classified into three classes. First class: MRCPP with a rough feasible set and a deterministic multiobjective function. Second class: problems with a deterministic feasible set and a rough multiobjective function.

Tuesday, 23 August 2016

A Joint Model for a Longitudinal Pulse Rate and Respiratory Rate of Congestive Heart Failure Patients

Acute Coronary Failure; ADHD: Attention-Deficit Hyperactivity Disorder; AF: Acute Failure; AIC: Akaike's Information Criterion; AICC: Akaike's Information Criterion Correction; ANOVA: Analysis of Variance; AOE: Association of the Evolutions; BIC: Bayesian Information Criterion; BMI: Body Mass Index; BNP: Brain Natriuretic Peptide; BS: Between Subjects

Pulse Rate

CHD: Coronary Heart Disease; CHF: Congestive Heart Failure; CI: Confidence Interval; COPD: Chronic Obstructive Pulmonary Disease; DBP: Diastolic Blood Pressure; EOA: Evolution Of Association; GLM: Generalized Linear Model; HF: Heart Failure; HR: Heart Rate; LMM: Linear Mixed Model; LVEF: Left Ventricle Ejection Fraction; ML: Maximum Likelihood; MRN: Medical Registration Number; NYHA: New York Heart Association;

QOL: Quality Of Life; REML: Restricted Maximum Likelihood; RSA: Respiratory Sinus Arrhythmia; SBP: Systolic Blood Pressure; SNRI: Serotonergic and Noradrenergic Working Antidepressants; TCA: Tricyclic Antidepressants; WS: Within Subjects.

Friday, 19 August 2016

Comparison of Macrodosimetric Efficacy of Transarterial Radioembolization (TARE)

Purpose: Transarterial 90Y microspheres radioembolization is emerging as a promising multidisciplinary therapeutic modality for primary and metastatic cancer in the liver. Currently, two different types of microspheres are used, whose main characteristic is their different activity density (activity per microsphere). In this paper the effect of the possible different distributions of the microspheres in a target is presented and discussed from a macrodosimetric point of view.


Material and methods: A 100 g virtual soft-tissue target region was built. The administered activity was chosen to obtain a target average absorbed dose of 100 Gy, and the number of 90Y microspheres needed was calculated for two different activity-per-microsphere values (2500 Bq/microsphere and 50 Bq/microsphere, respectively). The spheres were randomly distributed in the target and the Dose Volume Histograms were obtained for both. The cell surviving fractions (SF) for four different values of the radiobiological parameter α were calculated from the linear-quadratic model.

Macrodosimetric Efficacy

Results: The DVHs obtained are very similar and the SF is almost equal for both activity-per-microsphere values. Conclusions: This macrodosimetric approach shows no radiobiological difference between the glass and resin microspheres. Thus the different number of microspheres seems to have no effect when the number of spheres is large enough that the distance between the spheres in the target can be considered small compared to the range of the β-particles of 90Y.
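A back-of-the-envelope version of the setup above can be written down from the standard MIRD relation for 90Y (dose ≈ 49.67 Gy per GBq/kg); the four α values below are assumed for illustration, and the pure exponential survival term keeps only the linear part of the LQ model.

```python
# Rough macrodosimetric estimate: required activity for 100 Gy in a
# 100 g target, the resulting sphere counts for the two
# activity-per-microsphere values, and a simplified surviving fraction.
import math

mass_kg = 0.100                  # 100 g virtual target
target_dose_gy = 100.0
# Standard MIRD relation for 90Y: D[Gy] = 49.67 * A[GBq] / m[kg]
activity_gbq = target_dose_gy * mass_kg / 49.67
activity_bq = activity_gbq * 1e9

n_glass = activity_bq / 2500.0   # high activity per sphere (glass-like)
n_resin = activity_bq / 50.0     # low activity per sphere (resin-like)

# Surviving fraction from the linear term only, SF = exp(-alpha * D),
# for a few assumed radiosensitivity values alpha (Gy^-1)
for alpha in (0.001, 0.01, 0.1, 1.0):
    sf = math.exp(-alpha * target_dose_gy)
    print(f"alpha={alpha}: SF={sf:.3e}  spheres: {n_glass:.0f} vs {n_resin:.0f}")
```

The sphere counts differ by exactly the ratio of the activities per sphere (a factor of 50), which is what makes the spatial-distribution question in the paper interesting.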


Thursday, 18 August 2016

On the Diophantine Equation 1 + 5x^2 = 3y^n, n ≥ 3

The Diophantine equation x^2 + C = y^n, in positive integer unknowns x, y and n, has a long history. The first case to have been solved appears to be C = 1: in 1850 Victor Lebesgue showed, using an elementary factorization argument, that the only solution is x = 0, y = 1. Over the next 140 years many equations of the form x^2 + C = y^n were solved using Lebesgue's elementary trick. In 1993 John Cohn published an exhaustive historical survey of this equation, completing its solution for all but 23 values of C in the range 1 ≤ C ≤ 100.


It has been noted recently that the result of Bilu, Hanrot and Voutier can sometimes be applied to equations of the form x^2 + C = y^n when, instead of C being a fixed integer, C is a product of powers of fixed primes p1, …, pk.

By comparison, the Diophantine equation x^2 + C = 2y^n with the same restrictions has been solved only partially. For C = 1, John Cohn showed that the only solutions are x = y = 1 and x = 239, y = 13 with n = 4. Sz. Tengely studied the equation x^2 + q^(2m) = 2y^p, where x, y, q, p, m are integers with m > 0, p and q are odd primes and gcd(x, y) = 1. He proved that there are only finitely many solutions (m, p, q, x, y) for which y is not a sum of two consecutive squares.
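A quick brute-force search over a small box gives a feel for the title equation; this is illustration only — the paper's results are proved, not searched, and the search bounds below are arbitrary.

```python
# Brute-force search for small solutions of 1 + 5x^2 = 3y^n with n >= 3.
def small_solutions(x_max=200, y_max=200, n_max=10):
    sols = []
    for x in range(x_max + 1):
        lhs = 1 + 5 * x * x
        if lhs % 3:                 # right side is divisible by 3
            continue
        target = lhs // 3
        for y in range(1, y_max + 1):
            p, n = y ** 3, 3        # start at the smallest allowed exponent
            while p <= target and n <= n_max:
                if p == target:
                    sols.append((x, y, n))
                p *= y
                n += 1
    return sols

sols = small_solutions()
print(sols)   # (x, y, n) = (4, 3, 3) gives 1 + 5*16 = 81 = 3*27
```

The small solution (4, 3, 3) is easy to verify by hand: 1 + 5·4² = 81 = 3·3³.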

Wednesday, 17 August 2016

Alternative Interpretation of the Lorentz-transformation

The Lorentz transformation (LT) is the basis of the theories of relativity, which are capable of describing the experimentally well-confirmed relativistic phenomena that deviate from classical physics. Here I present a proof that results in an alternative interpretation of the LT. In particular, the LT cannot be applied to high relative velocities, and the related space-time modeling, one of the most important tools in physics and astronomy, will lead to a dead end. Two experiments are proposed to test this idea.

Lorentz-transformation

Suppose two reference frames A and B with identical emitters and detectors move with constant velocities against each other, but their velocities against a fixed point are not known. Due to the measured change of frequency, observers in those systems could calculate the relative velocity between them. However, in classical physics a formula for this model does not exist. If an observer in A can assume that he is at rest and B moves with velocity –v in his direction, then the classical Doppler formula is valid (with β = v/c).

f_A = f_B / (1 − β)

This formula is not based on a transmission medium like air for sound. It is sufficient to assume a constant velocity in relation to a reference point outside of this test system. Alternatively, if system B is at rest and the observer in A moves towards B with velocity +v, then a different Doppler formula is valid.

f_A = f_0 (1 + β)


This is observed outside of frames A and B where the information is transmitted with constant velocity c, independent of the movements of A and B.
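The distinction the text draws between the two classical Doppler cases can be made concrete with a few lines of arithmetic; the emitter frequency and β value are arbitrary illustration choices.

```python
# Compare the two classical Doppler formulas at the same relative speed
# beta = v/c: a source approaching an observer at rest vs. an observer
# moving toward a source at rest.
def f_source_moving(f0, beta):
    return f0 / (1.0 - beta)      # moving source, observer at rest

def f_observer_moving(f0, beta):
    return f0 * (1.0 + beta)      # moving observer, source at rest

beta = 0.1
f0 = 1.0e9                        # 1 GHz emitter, arbitrary choice
ratio = f_source_moving(f0, beta) / f_observer_moving(f0, beta)
# The predictions agree to first order but differ at second order:
# 1/(1-b) - (1+b) = b^2/(1-b)
print(ratio)
```

At β = 0.1 the two classical predictions already differ by about one percent, which is exactly the second-order regime where relativistic corrections become relevant.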

Tuesday, 16 August 2016

Matrix Lie Groups: An Introduction

Lie theory, the theory of Lie groups, Lie algebras, and their applications is a fundamental part of mathematics that touches on a broad spectrum of mathematics, including geometry (classical, differential, and algebraic), ordinary and partial differential equations, group, ring, and algebra theory, complex and harmonic analysis, number theory, and physics (classical, quantum, and relativistic).


It typically relies upon an array of substantial tools such as topology, differentiable manifolds and differential geometry, covering spaces, advanced linear algebra, measure theory, and group theory to name a few. However, we will considerably simplify the approach to Lie theory by restricting our attention to the most important class of examples, namely those Lie groups that can be concretely realized as (multiplicative) groups of matrices.
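A tiny concrete example of a matrix Lie group, of my own choosing, is the rotation group SO(2): rotation matrices are closed under multiplication, with R(a)R(b) = R(a+b), and each has determinant 1.

```python
# Rotation matrices realize the matrix Lie group SO(2): closed under
# multiplication and composition adds the angles.
import math
import numpy as np

def R(theta):
    c, s = math.cos(theta), math.sin(theta)
    return np.array([[c, -s], [s, c]])

a, b = 0.3, 1.1
prod = R(a) @ R(b)
closed = np.allclose(prod, R(a + b))               # group closure
det_one = math.isclose(float(np.linalg.det(prod)), 1.0)   # stays in SO(2)
print(closed, det_one)
```

This is the "concretely realized as groups of matrices" viewpoint in miniature: the abstract group law becomes ordinary matrix multiplication.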

Matrix Lie Groups


Lie theory began in the late nineteenth century, primarily through the work of the Norwegian mathematician Sophus Lie, who called these objects "continuous groups," in contrast to the usually finite permutation groups that had been principally studied up to that point. An early major success of the theory was to provide a viewpoint for a systematic understanding of the newer geometries, such as hyperbolic, elliptic, and projective, that had arisen earlier in the century.

Thursday, 11 August 2016

Local Non-similarity Solution for MHD Mixed Convection Flow

Combined heat and mass transfer on mixed convection non-similar flow of an electrically conducting nanofluid along a permeable vertical plate in the presence of thermal radiation is investigated. The governing partial differential equations of the problem are transformed into a system of nonlinear ordinary differential equations by applying the Sparrow-Quack-Boerner local non-similarity method (LNM). Furthermore, the obtained equations are solved numerically by the fourth-fifth order Runge-Kutta-Fehlberg method in conjunction with a shooting technique.
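The shooting idea used above can be sketched on a toy boundary value problem (mine, not the paper's equations): guess the unknown initial slope, integrate the ODE, and adjust the guess until the far boundary condition is hit. A Runge-Kutta-Fehlberg integrator would replace the fixed-step RK4 used here.

```python
# Shooting method sketch for the BVP  y'' = -y,  y(0) = 0,  y(pi/2) = 1:
# bisect on the unknown slope y'(0) using the miss at the far boundary.
import math

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def ode(t, y):                 # state y = [y, y'],  y'' = -y
    return [y[1], -y[0]]

def shoot(slope, t_end=math.pi / 2, n=200):
    y, h, t = [0.0, slope], t_end / n, 0.0
    for _ in range(n):
        y = rk4_step(ode, t, y, h)
        t += h
    return y[0] - 1.0          # miss distance at the far boundary

lo, hi = 0.0, 2.0              # bracketing guesses for y'(0)
for _ in range(60):            # bisection on the shooting residual
    mid = 0.5 * (lo + hi)
    if shoot(lo) * shoot(mid) <= 0:
        hi = mid
    else:
        lo = mid
print(mid)                     # converges to y'(0) = 1, i.e. y = sin(t)
```

The paper's system is higher order and coupled, but the structure is identical: unknown wall gradients play the role of the slope being bisected.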
Local Non-similarity Solution

The profiles of flow and heat transfer are verified using five types of nanofluids containing metallic or nonmetallic nanoparticles, namely copper (Cu), alumina (Al2O3), copper oxide (CuO), silver (Ag) and titanium dioxide (TiO2), with water as the base fluid. The Rosseland approximation for a black body is used to represent the radiative heat transfer. Effects of thermal radiation, buoyancy force parameters and nanofluid volume fraction on the velocity and temperature profiles in the presence of suction/injection are depicted graphically. Comparisons with previously published works are performed, and excellent agreement between the results is obtained. The conclusion is that the flow field is affected by these parameters.

Wednesday, 10 August 2016

Biometric Authentication in Cloud Computing

Abstract: Information and communication technology (ICT) has penetrated deep into human lives and is affecting human life style in different aspects. The rapid growth in ICT has brought improvements in computing devices and computing techniques. Currently cloud computing is one of the most hyped innovations. It has several positive impacts like reduced cost, increased throughput and ease of use, but it also has certain security issues that must be dealt with carefully. There are several techniques that can be used to overcome this major problem. This paper analyses biometric authentication in cloud computing, its various techniques and how they are helpful in reducing security threats. It provides a comprehensive and structured overview of biometric authentication for enhancing cloud security.

Biometric Authentication in Cloud Computing

Description: The concept of cloud computing gained popularity in the 1990s, though its origins date back to the 1960s. Cloud computing refers to the provision of scalable, IT-related services to users through the Internet: dynamically scalable computing resources are delivered as a service over the network.


This model permits ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources. These resources can be rapidly provisioned and released with minimal management effort, and may include networks, servers, application programs or any kind of administrative programs.

It provides 3 different kinds of service models:
1. Software as a service provides the user with software running on a cloud infrastructure.
2. Platform as a service provides a platform (i.e. tools, libraries, services) on which users can deploy their own applications.
3. Infrastructure as a service facilitates the user by providing computing resources on which the user can run software, without control over the underlying infrastructure but with control over the operating system being used.

Four deployment models are used in cloud computing:
1. Public cloud is open for use by the general public.
2. Community cloud is shared by several users.
3. Private cloud facilitates a single private organization.
4. Hybrid cloud structure consists of two or more cloud models.

Tuesday, 9 August 2016

Nonholonomic Ricci Flows of Riemannian Metrics and Lagrange-Finsler Geometry

A series of the most remarkable results in mathematics are related to Grisha Perelman's proof of the Poincaré conjecture, built on the geometrization (Thurston) conjecture for three-dimensional Riemannian manifolds and on R. Hamilton's Ricci flow theory; see the reviews and basic references explained by Kleiner. Much of the work on Ricci flows has been performed and validated by experts in geometric analysis and Riemannian geometry. Recently, a number of applications of Ricci flow theory in physics were proposed by Vacaru. Some geometrical approaches in modern gravity and string theory are connected to the method of moving frames and distributions of geometric objects on (semi-)Riemannian manifolds and their generalizations to spaces provided with nontrivial torsion, nonmetricity and/or nonlinear connection structures.

Nonholonomic Ricci Flows

The geometry of nonholonomic manifolds and non-Riemannian spaces is largely applied in modern mechanics, gravity, cosmology and classical/quantum field theory, as explained by Stavrinos. Such spaces are characterized by three fundamental geometric objects: nonlinear connection (N-connection), linear connection and metric. There is an important geometrical problem: to prove the existence of the "best possible" metric and linear connection adapted to an N-connection structure. From the point of view of Riemannian geometry, the Thurston conjecture only asserts the existence of a best possible metric on an arbitrary closed three-dimensional (3D) manifold. It is a very difficult task to define Ricci flows of mutually compatible fundamental geometric structures on non-Riemannian manifolds (for instance, on a Finsler manifold). For such purposes, we can also apply Hamilton's approach, correspondingly generalized in order to describe nonholonomic (constrained) configurations. The first attempts to construct exact solutions of the Ricci flow equations on nonholonomic Einstein and Riemann-Cartan (with nontrivial torsion) manifolds, generalizing well known classes of exact solutions in Einstein and string gravity, were performed and explained by Vacaru.
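For reference, Hamilton's Ricci flow evolves a Riemannian metric by its Ricci curvature; this is the standard holonomic form, before the N-connection-adapted generalization discussed in the text:

```latex
\frac{\partial g_{ij}}{\partial t} = -2\, R_{ij}(g)
```

In the nonholonomic setting the Levi-Civita data are replaced by connection and curvature objects adapted to the N-connection splitting, but the evolution equation keeps this same shape.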


Monday, 8 August 2016

Verification of Some Properties of the C-nilpotent Multiplier in Lie Algebras

The purpose of this paper is to obtain some inequalities and certain bounds for the dimension of the c-nilpotent multiplier of finite dimensional nilpotent Lie algebras and their factor Lie algebras. Also, we give an inequality connecting the dimension of the c-nilpotent multiplier of L with the dimensions of the Lie algebras γ_d(L) and L/Z_{d-1}(L). Finally, we compare our results with the previously known results.

C-nilpotent Multiplier

All Lie algebras referred to in this article are (of finite or infinite dimension) over a fixed field F, and the square bracket [ , ] denotes the Lie product. Let 0 → R → F → L → 0 be a free presentation of a Lie algebra L, where F is a free Lie algebra.

This is analogous to the definition of the Baer invariant of a group with respect to the variety of nilpotent groups of class at most c, given by Baer; see [1-3] for more information on the Baer invariant of groups.

The purpose of this paper is to obtain some inequalities for the dimension of the c-nilpotent multiplier of finite dimensional nilpotent Lie algebras and their factor Lie algebras (Corollary 2.3 and Corollary 2.5). Finally, we compare our results with the previously given upper bound.

Solving the Traveling Salesman Problem using the Ant Colony Optimization Algorithm

Ant Colony Optimization (ACO) is a relatively new meta-heuristic and successful technique in the field of swarm intelligence. This technique was first introduced by Dorigo and his colleagues. This technique is used for many applications especially problems that belong to the combinatorial optimization. 
travelling salesman problem algorithm

ACO algorithms model the behavior of real ant colonies in establishing the shortest path between food sources and nests. Ants release pheromone on the ground while walking from their nest to food and back. Ants move according to the amount of pheromone on their paths: other ants tend to follow and choose the shorter path, which accumulates a higher amount of pheromone. Artificial ants imitate the behavior of real ants, but can solve much more complicated problems than real ants can.

ACO has been widely applied to various combinatorial optimization problems such as the traveling salesman problem (TSP), job-shop scheduling problem (JSP), vehicle routing problem (VRP) and quadratic assignment problem (QAP). Although ACO has a powerful capacity to find solutions to combinatorial optimization problems, it suffers from stagnation and premature convergence, and its convergence speed is always slow. These problems become more obvious as the problem size increases.
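The pheromone-following behavior described above translates into a compact Ant System for small TSP instances; the parameter values (alpha, beta, rho) and the 5-city coordinates below are arbitrary illustration choices, not tuned settings from the literature.

```python
# Minimal Ant System sketch for a small TSP instance: ants build tours
# probabilistically from pheromone and inverse distance, then pheromone
# evaporates and is deposited in proportion to tour quality.
import math
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def aco_tsp(dist, n_ants=20, n_iter=100, alpha=1.0, beta=3.0, rho=0.5, q=1.0):
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]              # pheromone trails
    best_tour, best_len = None, float("inf")
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            tour = [random.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in tour]
                # choice weight ~ pheromone^alpha * (1/distance)^beta
                w = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta for j in cand]
                tour.append(random.choices(cand, weights=w)[0])
            tours.append((tour_length(tour, dist), tour))
        tau = [[(1 - rho) * t for t in row] for row in tau]   # evaporation
        for length, tour in tours:                            # deposit
            for i in range(n):
                a, b = tour[i], tour[(i + 1) % n]
                tau[a][b] += q / length
                tau[b][a] += q / length
            if length < best_len:
                best_len, best_tour = length, tour
    return best_tour, best_len

pts = [(0, 0), (0, 1), (1, 1), (1, 0), (0.5, 2)]
dist = [[math.hypot(ax - bx, ay - by) or 1e-9 for (bx, by) in pts]
        for (ax, ay) in pts]
random.seed(1)
tour, length = aco_tsp(dist)
```

This plain Ant System exhibits exactly the weaknesses noted above (stagnation, slow convergence on larger instances); variants such as Max-Min Ant System address them by bounding the pheromone values.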


Wednesday, 3 August 2016

Task Scheduling in Parallel Processing: Analysis

Task scheduling in parallel processing is a technique in which processes are assigned to different processors. It uses different types of algorithms and techniques to reduce the number of delayed jobs. Nowadays, different kinds of scheduling algorithms and techniques are used to reduce the execution time of tasks. Since task scheduling is an NP-hard problem and no single proposed algorithm can be called the best, in this paper we review some of the task scheduling algorithms and other techniques.

Parallel processing

Parallel processing divides a process into multiple processes and executes them concurrently using more than one CPU or processor. Before dividing, it is checked whether the process is divisible; if it is not, the process is executed as a whole, and if it is divisible, the subprocesses are mapped onto separate processors. After execution these processes are reassembled and the processing is completed, as shown in Figure 1.

Parallel processing is used for several reasons: it provides concurrency, saves time, solves larger problems, maximizes load balancing and makes good use of parallel hardware architecture. In a multiprocessor environment parallel processing has two kinds of processors, heterogeneous and homogeneous: in the heterogeneous case the processors differ in speed and cost, while in the homogeneous case all processors are of the same kind in every respect, as shown in Table 1. By adding extra processors it is possible to reduce the execution time of a task.
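One classic heuristic for the NP-hard scheduling problem discussed above is greedy list scheduling with longest-processing-time-first (LPT) on identical processors; the task times below are made up for illustration.

```python
# LPT list scheduling: sort tasks by decreasing time, then repeatedly
# assign the next task to the currently least-loaded processor.
import heapq

def lpt_schedule(task_times, n_procs):
    """Assign tasks to processors; returns the assignment and makespan."""
    heap = [(0.0, p) for p in range(n_procs)]      # (load, processor id)
    heapq.heapify(heap)
    assignment = {p: [] for p in range(n_procs)}
    for t in sorted(task_times, reverse=True):     # longest task first
        load, p = heapq.heappop(heap)              # least-loaded processor
        assignment[p].append(t)
        heapq.heappush(heap, (load + t, p))
    makespan = max(sum(ts) for ts in assignment.values())
    return assignment, makespan

tasks = [7, 5, 4, 3, 3, 2, 2]
assignment, makespan = lpt_schedule(tasks, n_procs=2)
```

On this instance LPT reaches a makespan of 14 while the optimum is 13 (split {7, 4, 2} vs {5, 3, 3, 2}), which illustrates why LPT is a fast approximation rather than an exact solver.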