Friday 30 December 2016

Nitrogen Dispersion

A free-surface mathematical model is presented for studying nitrogen injection into the gas cap for pressure maintenance; a dispersion model is used for the flow in the well zone. The nitrogen injected into the cap has a higher density than the original gas, so it moves down toward the gas-oil contact (GOC). Since there is a large amount of nitrogen at the GOC boundary, it spreads and disperses into the oil.

physical mathematics journal
Around a well about to be invaded by gas, coning phenomena occur at the GOC; therefore any model used to interpret the output of the well must include the free-surface phenomenon. This paper proposes a method to measure the degree of contamination of the oil zone, based on compositional data from a production well before its producing interval is invaded by the gas cap. The most contaminated oil lies close to the GOC, so nitrogen is expected to appear in the well's production stream just before the well is overcome by gas.

Thursday 29 December 2016

A Meeting of Great Minds, Sophus Lie and John Nash throughout their Works

It is well known that Marius Sophus Lie (1842-1899) and John Forbes Nash (1928-2015) are great mathematicians. Sophus Lie came from Norway and John Nash from the United States of America. Their stories have certain resemblances and remarkable relations, and this editorial emphasizes some of them. When they started their university studies, their respective first interests were not mathematics.

journal lie theory impact factor
That is to say, Lie began in astronomy and Nash in chemical engineering. Once they turned to mathematics, the first received the Lobachevsky Prize in 1897 and the second the Nobel Prize in 1994 and the Abel Prize in 2015 (Niels Abel was the uncle of Sophus Lie's wife, Anna Birch). In addition, their contributions to geometry are considerable, particularly in differential equations. Lie worked on transformation groups relative to partial differential equations, in other words on Lie groups, and on special non-associative algebras named Lie algebras.

Wednesday 28 December 2016

Advances in Logic, Operations and Computational Mathematics

Journal of Applied & Computational Mathematics Volume 5, Issue 2 comprises 7 research articles and 4 opinion articles and focuses on advances involving polygon foldings, Euler numbers, and linear and non-linear equations.

computational applied mathematics impact factor
EL-Kholy et al., in their research article, discussed balanced foldings over a polygon and Euler numbers. The study proved that for a balanced folding of a simply connected surface M, there is a subgroup of the group of all homeomorphisms of M that acts 1-transitively on the 2-cells of M.

Gil et al., in their research, reported on exponentially stable non-linear, non-autonomous multivariable discrete systems. Based on recent estimates for matrix equations, the findings concern a class of non-autonomous discrete-time systems governed by semi-linear vector difference equations with slowly varying linear parts.

Monday 26 December 2016

Models of damped oscillators in quantum mechanics

We consider several models of damped oscillators in nonrelativistic quantum mechanics within a general approach to the dynamics of the time-dependent Schrödinger equation with variable quadratic Hamiltonians. The Green functions are found explicitly in terms of elementary functions, and the corresponding gauge transformations are discussed.

physical mathematics journal
The factorization technique is applied to the case of a shifted harmonic oscillator. The time evolution of the expectation values of the energy-related operators is determined for two models of the quantum damped oscillators under consideration. The classical equations of motion for the damped oscillations are derived for the corresponding expectation values of the position operator.

Friday 2 December 2016

On the Use of P-Values in Genome Wide Disease Association Mapping

In hypothesis testing, the p-value is routinely used as a measure of statistical evidence against the null hypothesis, where a smaller p-value indicates stronger evidence substantiating the alternative hypothesis. The p-value is the probability of a type-I error in a hypothesis test, namely, the chance that one falsely rejects the null hypothesis when the null holds true. In a disease genome-wide association study (GWAS), the p-value potentially tells us how likely it is that a putative disease-associated variant is due to random chance. For a long time p-values have been taken seriously by the GWAS community as a safeguard against false positives.

genome research impact factor
Every disease-associated mutation reported in a GWAS must reach a stringent p-value cutoff (e.g., 10^-8) in order to survive the multiple testing corrections. This is reasonable because, after testing millions of variants in the genome, some random variants ought to yield small p-values purely by chance. Despite the p-value's theoretical justification, however, it has become increasingly evident that statistical p-values are not nearly as reliable as was once believed.
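To see concretely why such a stringent cutoff is needed, one can simulate a GWAS under the global null, where every p-value is uniform on (0, 1). The sketch below is our own illustration, not from the article; a 0.05/n Bonferroni threshold stands in for the fixed 10^-8 cutoff:

```python
import random

random.seed(0)
n_tests = 1_000_000

# Under the global null, each test's p-value is Uniform(0, 1).
p_values = [random.random() for _ in range(n_tests)]

min_p = min(p_values)
nominal_hits = sum(p < 0.05 for p in p_values)               # "significant" at 0.05
bonferroni_hits = sum(p < 0.05 / n_tests for p in p_values)  # corrected threshold

print(f"smallest null p-value: {min_p:.2e}")       # typically around 1e-6
print(f"hits at 0.05: {nominal_hits}")             # roughly 50,000 false positives
print(f"hits at Bonferroni threshold: {bonferroni_hits}")  # usually 0
```

Even though no variant is truly associated, tens of thousands of tests pass the nominal 0.05 level, while the corrected threshold eliminates essentially all of them.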

Friday 25 November 2016

Pressure the Universe

In physics, mass-energy equivalence is a concept formulated by Albert Einstein that explains the relationship between mass and energy. It states that every mass has an energy equivalent and vice versa. The recognition of the concept of mass-energy equivalence requires the recognition of the existence of the force of pressure of the universe.

mathematics journals
Pressure of the universe is the cause of all the movements taking place in the real world. It manifests itself most strongly within the corpuscle, where, under its influence, a portion of energy is shaped that defines the corpuscle's mass. Let us deduce the concept of universe pressure from the formula of mass-energy equivalence.
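The formula referred to is Einstein's mass-energy equivalence; written out for reference (this is the standard statement of the relation, supplied here rather than taken from the author's own derivation):

```latex
E = m c^{2} \qquad\Longleftrightarrow\qquad m = \frac{E}{c^{2}}
```

The author's program is then to express a "pressure of the universe" starting from this relation.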

Friday 18 November 2016

Control of Road Traffic Using Learning Classifier System

controlling Road Traffic
Road traffic control through proper control of junction signals is one of the more complicated control problems. A conventional approach is the rule-based system, which is often employed in designing systems with deterministic states. In this paper, we study the control problem with the idea of distributed control using Learning Classifier Systems (LCS). That is, the signal at each junction is controlled separately from the other junctions by an independent Learning Classifier System, with the aim of decreasing the queues of automobiles in the streets leading to the junction. Furthermore, learning classifier systems are used to control junction traffic within a distributed control system.

Wednesday 16 November 2016

The Use of Molecular and Imaging Biomarkers in Lung Cancer Risk Prediction

In hypothesis testing, the p-value is routinely used as a measure of statistical evidence against the null hypothesis, where a smaller p-value indicates stronger evidence substantiating the alternative hypothesis. The p-value is the probability of a type-I error in a hypothesis test, namely, the chance that one falsely rejects the null hypothesis when the null holds true.


Imaging Biomarkers in Lung Cancer
In a disease genome-wide association study (GWAS), the p-value potentially tells us how likely it is that a putative disease-associated variant is due to random chance. For a long time p-values have been taken seriously by the GWAS community as a safeguard against false positives. Every disease-associated mutation reported in a GWAS must reach a stringent p-value cutoff (e.g., 10^-8) in order to survive the multiple testing corrections. This is reasonable because, after testing millions of variants in the genome, some random variants ought to yield small p-values purely by chance.

Friday 11 November 2016

On the Use of P-Values in Genome Wide Disease Association Mapping

In hypothesis testing, the p-value is routinely used as a measure of statistical evidence against the null hypothesis, where a smaller p-value indicates stronger evidence substantiating the alternative hypothesis. The p-value is the probability of a type-I error in a hypothesis test, namely, the chance that one falsely rejects the null hypothesis when the null holds true. In a disease genome-wide association study (GWAS), the p-value potentially tells us how likely it is that a putative disease-associated variant is due to random chance. For a long time p-values have been taken seriously by the GWAS community as a safeguard against false positives.

Genome Wide Disease
Every disease-associated mutation reported in a GWAS must reach a stringent p-value cutoff in order to survive the multiple testing corrections. This is reasonable because, after testing millions of variants in the genome, some random variants ought to yield small p-values purely by chance. Despite the p-value's theoretical justification, however, it has become increasingly evident that statistical p-values are not nearly as reliable as was once believed.

Thursday 10 November 2016

A mathematical model that can describe the human emotion communications

Search Algorithm

Recently, scientists developed a mathematical model to describe the human emotion communications and interactions required for the perception and recognition of the environment. This mathematical model is capable of defining functional distortions during emotional activity.

Monday 7 November 2016

On Discretizations of the Generalized Boole Type Transformations and their Ergodicity

The Frobenius-Perron Operator and Its Discretization:

We consider an m-dimensional, not necessarily compact, C^1-manifold M^m, endowed with a Lebesgue measure μ defined on the σ-algebra of Borel subsets of M^m, and φ: M^m → M^m an almost everywhere smooth mapping. The related Frobenius-Perron operator

Boole Type Transformations

is defined by means of the integral relationship

for any and all μ-measurable subsets A ⊂ M^m. Equivalently, it can be defined as a mapping on the measure space (M^m, μ).
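The integral relationship itself did not survive in this excerpt; in the standard formulation (supplied here as a reconstruction from the usual definition of the Frobenius-Perron operator P_φ acting on densities f, not recovered from the original), it reads:

```latex
\int_{A} \bigl(P_{\phi} f\bigr)\, d\mu \;=\; \int_{\phi^{-1}(A)} f \, d\mu
```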

Friday 4 November 2016

Properties of Nilpotent Orbit Complexification

Nilpotent Orbit

Real and complex nilpotent orbits have received considerable attention in the literature. The former have been studied in a variety of contexts, including differential geometry, symplectic geometry, and Hodge theory.  Also, there has been some interest in concrete descriptions of the poset structure on real nilpotent orbits in specific cases. By contrast, complex nilpotent orbits are studied in algebraic geometry and representation theory — in particular, Springer Theory.


Attention has also been given to the interplay between real and complex nilpotent orbits, with the Kostant-Sekiguchi Correspondence being perhaps the most famous instance. Accordingly, the present article provides additional points of comparison between real and complex nilpotent orbits. Specifically, let g be a finite-dimensional semisimple real Lie algebra with complexification g_C, and consider each real nilpotent orbit.

Thursday 3 November 2016

About the ggplot2 Package

"ggplot2 is an R package for producing statistical, or data, graphics, but it is unlike most other graphics packages because it has a deep underlying grammar. This grammar, based on the Grammar of Graphics, is composed of a set of independent components that can be composed in many different ways. Plots can be built up iteratively and edited later. A carefully chosen set of defaults means that most of the time you can produce a publication-quality graphic in seconds, but if you do have special formatting requirements, a comprehensive theming system makes it easy to do what you want.

ggplot2 Package
ggplot2 is designed to work in a layered fashion, starting with a layer showing the raw data then adding layers of annotation and statistical summaries."

"ggplot2 is a plotting system for R, based on the grammar of graphics, which tries to take the good parts of base and lattice graphics and none of the bad parts. It takes care of many of the fiddly details that make plotting a hassle (like drawing legends) as well as providing a powerful model of graphics that makes it easy to produce complex multi-layered graphics."

Tuesday 1 November 2016

Statistical Methods in Trials with Sequential Parallel Design for Trials with High Placebo Response

Strong placebo response has been problematic in central nervous system (CNS) clinical trials, leading to a reduced drug effect and thus a decrease in the probability of finding an effective drug. The ideal situation is to have comparative data collected only from subjects who are placebo non-responders. Stringent trial procedures, together with enrichment for placebo non-responders, are some of the ways to decrease the placebo response in clinical trials.

High Placebo Response
Fava et al. (2003) proposed a sequential parallel design (SPD) in which subjects are randomized only at the start of Period 1. Accordingly, some placebo non-responders in Period 1 continue on placebo in Period 2 and others switch to drug in Period 2, while subjects treated with drug in Period 1 continue to receive drug in Period 2. The treatment sequences for all subjects are pre-specified prior to trial start, and the Period 2 data for subjects who are on drug in both periods are used for safety evaluations only.

Wednesday 26 October 2016

Experiments done by researchers to classify the dark matter universe

According to the scientific literature, 96% of the total amount of matter in the universe has been considered to be dark matter. It fills the entire universe uniformly and cannot be identified with any of the observable celestial bodies. It is named dark matter because it is imperceptible.

Dark Matter Universe
The history of the discovery of dark matter is as follows. The American astrophysicists A. Penzias and R. Wilson found, in the horn receiving antenna of a radio telescope, a weak non-vanishing background of extraterrestrial origin. It did not depend on the orientation of the antenna. This radiation is called the relic radiation. Following its discovery, Penzias and Wilson were awarded the Nobel Prize in Physics.

Tuesday 25 October 2016

The ABCs of the Mathematical Infinitology. Principles of the Modern Theory and Practice of Scientific-and-Mathematical Infinitology

Mathematical Infinitology
In any praiseworthy hobby, business, or craft that has appeared among humans over a long process of evolution, and thanks to the growth of mental and creative abilities, such high spheres of human knowledge, personal skill, or intellectual ability have sometimes developed that many centuries, and even millennia, came and passed before some difficult scientific idea or secret of the craft could at last find its final solution, or be transformed into a form of representation or embodiment available to natural perception by people, specialists, or scientists, such that a team of highly skilled experts could recognize this or that solution as a perfect standard.

It isn't necessary to look far for examples! The most ancient and still unresolved task is the secret of the natural prime numbers; the cornerstone of the scientific theory of their study was laid by Eratosthenes of Cyrene, the Ancient Greek mathematician who lived in the third century B.C. The knowledge of the great truths of the world has always, from time immemorial, been the destiny of an elite of advanced thinkers possessing rich life experience.

Monday 24 October 2016

Simulations of Three-dimensional Second Grade Fluid Flow Analysis in Converging-Diverging Nozzle

Analysing the flow pattern in a converging-diverging nozzle has been an interesting topic in computational fluid dynamics. There are numerous applications of this flow phenomenon in the aerospace and engineering sciences. Such processes are difficult to handle analytically due to the complex mathematical model associated with the flow and the ensuing instabilities carried by the flow parameters. Looking back at the history, Jeffery and Hamel, in their studies, considered the steady two-dimensional Newtonian fluid flow in a converging-diverging channel.

Converging-Diverging Nozzle
They observed quite interesting results by treating the Navier-Stokes equations with similarity transforms. Further developments were presented by Schlichting and Batchelor based on the boundary layer approximations. Makinde examined incompressible Newtonian fluid flow by incorporating a linearly diverging symmetrical channel. Recently, Zarqa et al. performed an approximate analytical analysis using the Adomian decomposition method for a channel with a variable diverging ratio.

Friday 21 October 2016

Check how the data analytics can impact healthcare

Big Data Analytics on Healthcare

Big Data: Broadly, data that is difficult to collect, store, or process within conventional systems is termed big data. The term is used to describe data of high volume, high velocity, and high variety, which requires new technologies and techniques to capture, store, and analyze, with the aim of enhancing decision making and providing insight and discovery.

Volume refers to the amount of data, expressed in terabytes and petabytes. Variety refers to the number of types of data, including unstructured data in the form of text, video, audio, click streams, 3D data, and log files. Velocity refers to the speed of data processing: the data streaming from mobile devices, click streams, high-frequency stock trading, and machine-to-machine processes is massive and continuously fast-moving.

Thursday 20 October 2016

Opinion on Adaptive Designs in Clinical Trials

Starting from a handful of research papers published in Europe's peer-reviewed statistical journals, adaptive designs have made much progress in development and implementation over the past 20 years. They are anticipated to increase the information value of clinical trial data, enabling better decisions during the course of a trial and speeding up the development process in a context of fierce competition and limited trial budgets.

Adaptive Designs in Clinical Trials
The main types of adaptive designs so far are: 1) adaptive randomization, which allows changing randomization probabilities using information from past treatment assignments (such as the biased coin design) and which may be covariate-adaptive, response-adaptive, or covariate-adjusted response-adaptive; 2) adaptive dose-response designs; 3) sample size re-estimation; 4) treatment selection designs; 5) group sequential designs. All areas of this topic are undergoing active development because analytic derivations are not well investigated for many methods.
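As a concrete illustration of the first item, here is a minimal sketch of Efron's biased coin rule with the textbook bias p = 2/3; this is our own example, not code from any of the papers discussed:

```python
import random

def biased_coin_assign(n_subjects, p=2/3, seed=1):
    """Efron's biased coin: when the arms are unbalanced, assign the next
    subject to the under-represented arm with probability p."""
    rng = random.Random(seed)
    counts = {"A": 0, "B": 0}
    assignments = []
    for _ in range(n_subjects):
        if counts["A"] == counts["B"]:
            arm = rng.choice(["A", "B"])            # balanced: fair coin
        else:
            lagging = min(counts, key=counts.get)   # under-represented arm
            leading = max(counts, key=counts.get)
            arm = lagging if rng.random() < p else leading
        counts[arm] += 1
        assignments.append(arm)
    return assignments, counts

assignments, counts = biased_coin_assign(100)
print(counts)   # the two arms tend to stay nearly balanced
```

Unlike a fair coin, this rule actively pulls the allocation back toward balance whenever one arm gets ahead, while still keeping each assignment unpredictable.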

Monday 17 October 2016

Nonholonomic Ricci Flows of Riemannian Metrics and Lagrange-Finsler Geometry

A series of the most remarkable results in mathematics are related to Grisha Perelman's proof of the Poincaré Conjecture, built on the geometrization conjecture (Thurston) for three-dimensional Riemannian manifolds and on R. Hamilton's Ricci flow theory; see the reviews and basic references explained by Kleiner. Much of the work on Ricci flows has been performed and validated by experts in geometric analysis and Riemannian geometry. Recently, a number of applications of Ricci flow theory in physics were proposed by Vacaru.

Lagrange-Finsler Geometry
Some geometrical approaches in modern gravity and string theory are connected to the method of moving frames and distributions of geometric objects on (semi-)Riemannian manifolds and their generalizations to spaces provided with nontrivial torsion, nonmetricity, and/or nonlinear connection structures. The geometry of nonholonomic manifolds and non-Riemannian spaces is widely applied in modern mechanics, gravity, cosmology, and classical/quantum field theory, as explained by Stavrinos.

Friday 14 October 2016

Heat Conduction: Hyperbolic Self-similar Shock-waves in Solid Medium

Heat Conduction
Analytic solutions for cylindrical thermal waves in a solid medium are given, based on the nonlinear hyperbolic system of heat flux relaxation and energy conservation equations. The Fourier-Cattaneo phenomenological law is generalized so that the relaxation time and the heat propagation coefficient have general power-law temperature dependence. From such laws one cannot form a second-order parabolic or telegraph-type equation. We therefore consider the original non-linear hyperbolic system itself, with a self-similar Ansatz for the temperature distribution and for the heat flux. As results, continuous and shock-wave solutions are presented. For physical grounding, numerous materials with various temperature-dependent heat conduction coefficients are mentioned.
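In symbols, the generalized Fourier-Cattaneo law described above can be sketched as follows (the notation and the names of the exponents are our own; the paper's exact parametrization may differ):

```latex
\tau(T)\,\partial_{t}\, q + q = -\,\kappa(T)\,\nabla T,
\qquad
\tau(T) = \tau_{0}\, T^{\varepsilon}, \qquad \kappa(T) = \kappa_{0}\, T^{\omega}
```

This flux law is coupled to the energy conservation equation for the temperature field, and the self-similar Ansatz is applied jointly to T and q.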

Thursday 13 October 2016

A Search Algorithm

This is an algorithm with the same time complexity as linear search, O(n), but it is still better than linear search in terms of execution time. Let A[] be an array of size N. If the element we want to find is at any position before N/2, then "my-search" and linear search have the same execution time; the magic happens when the search element is after position N/2.

Search Algorithm
Suppose the element we want is at the Nth position; linear search will find it after N iterations, but "my-search" finds it after the 1st iteration itself. An element at position (N-i) is found in the (i+1)th iteration; for example, if the size is 1000, the element at position 1000 is found in the 1st iteration, the one at position 999 in the 2nd, and so on.
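The scheme described above amounts to a bidirectional linear search, checking one index from each end per iteration; a minimal sketch (our own reconstruction of the idea, not the author's code):

```python
def my_search(arr, target):
    """Bidirectional linear search: probe one index from each end per
    iteration. Worst case is still O(n) comparisons, but an element near
    the tail is found after only a few iterations."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        if arr[lo] == target:
            return lo
        if arr[hi] == target:
            return hi
        lo += 1
        hi -= 1
    return -1

# The element at position 1000 of a 1000-element array is found in the
# very first iteration:
print(my_search(list(range(1, 1001)), 1000))  # -> 999 (0-based index)
```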

Wednesday 12 October 2016

Biomechanical Modeling of Human Body Movement

The capabilities of human body motion seem endless, thanks to a long evolutionary process. The progress made from the first step of a baby to an Olympic performance suggests that human movements have attained perfection in their specialized functions. However, the ability to predict how the whole body will move and how it will exchange forces with the environment is becoming vital for performance optimization, device development, and safety, particularly in the research fields of sport sciences, ergonomics, safety, clinical sciences, and industry.

Biomechanical Modeling
Modelling human body motion is a huge challenge, as it requires multifaceted research that is extremely diverse to apply. Indeed, it requires an understanding of the internal and external biological and physical principles that make human movement and coordination possible and guide them, as well as the capacity to give them a realistic, high-fidelity representation. Over more than 30 years of research, biomechanics, the research area studying human motion, has made progress in modelling human motion, but the results are mixed. The purpose of this review is to report the state of knowledge and the progress of biomechanics regarding its application to the field of sport.

Friday 7 October 2016

Nonholonomic Ricci Flows of Riemannian Metrics and Lagrange-Finsler Geometry

A series of the most remarkable results in mathematics are related to Grisha Perelman's proof of the Poincaré Conjecture, built on the geometrization conjecture (Thurston) for three-dimensional Riemannian manifolds and on R. Hamilton's Ricci flow theory; see the reviews and basic references explained by Kleiner. Much of the work on Ricci flows has been performed and validated by experts in geometric analysis and Riemannian geometry. Recently, a number of applications of Ricci flow theory in physics were proposed by Vacaru.

Riemannian Metrics
Some geometrical approaches in modern gravity and string theory are connected to the method of moving frames and distributions of geometric objects on (semi-)Riemannian manifolds and their generalizations to spaces provided with nontrivial torsion, nonmetricity, and/or nonlinear connection structures. The geometry of nonholonomic manifolds and non-Riemannian spaces is widely applied in modern mechanics, gravity, cosmology, and classical/quantum field theory, as explained by Stavrinos.

Thursday 6 October 2016

High-order Accurate Numerical Methods for Solving the Space Fractional Advection-dispersion Equation

The fractional advection-dispersion equation (FADE) is a generalization of the classical advection-dispersion equation (ADE). It provides a useful description of transport dynamics in complex systems governed by anomalous diffusion and non-exponential relaxation. The FADE was first proposed by Chaves to investigate the mechanism of superdiffusion, with the goal of having a model able to generate the Lévy distribution; it was later generalized by Benson et al. and has since been treated by numerous authors. Many numerical methods have been proposed for solving the FADE.

Numerical Methods
Meerschaert and Tadjeran developed practical numerical methods to solve the one-dimensional space FADE with variable coefficients. Liu et al. transformed the space fractional Fokker-Planck equation into a system of ordinary differential equations (method of lines), which was then solved using backward differentiation formulas. Liu et al. proposed an implicit difference method (IDM) and an explicit difference method (EDM) to solve a space-time FADE. Liu et al. also presented a random walk model for approximating a Lévy-Feller advection-dispersion process and proposed an explicit finite difference approximation (EFDA).
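Many of the difference schemes above rest on the (shifted) Grünwald-Letnikov approximation of the fractional derivative, whose weights g_k = (-1)^k C(alpha, k) obey a simple recurrence. A minimal sketch of computing them (our own illustration, not code from the cited papers):

```python
def grunwald_weights(alpha, n):
    """Grünwald-Letnikov weights g_k = (-1)^k * C(alpha, k), computed via
    the recurrence g_0 = 1, g_k = g_{k-1} * (k - 1 - alpha) / k."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (k - 1 - alpha) / k)
    return w

# Sanity check: for alpha = 2 the weights reduce to the classical
# second-difference stencil 1, -2, 1, followed by zeros.
print(grunwald_weights(2, 4))  # -> [1.0, -2.0, 1.0, 0.0, 0.0]
```

For a fractional order 1 < alpha < 2 the weights do not terminate, which is exactly why FADE discretizations lead to dense (lower-triangular) matrices instead of the banded matrices of the classical ADE.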

Thursday 29 September 2016

A Bayesian Nonlinear Mixed-Effects Disease Progression Model

Bayesian Nonlinear Mixed-Effects Disease
A nonlinear mixed-effects approach is developed for disease progression models that incorporate variation in age in a Bayesian framework. We further generalize the probability model for sensitivity to depend on age at diagnosis, time spent in the preclinical state and sojourn time. The developed models are then applied to the Johns Hopkins Lung Project data and the Health Insurance Plan for Greater New York data using Bayesian Markov chain Monte Carlo and are compared with the estimation method that does not consider random-effects from age. Using the developed models, we obtain not only age-specific individual-level distributions, but also population-level distributions of sensitivity, sojourn time and transition probability.

Wednesday 28 September 2016

Analytical Modeling Enables One to Explain Paradoxical Situations in Behavior and Performance of Electronic Product Materials

Merits, attributes, and challenges associated with the application of analytical (mathematical) predictive modeling in electronic materials science and engineering are addressed, based mostly on the author's research during his tenure with Basic Research, Bell Laboratories, and later with UC-Santa Cruz and Portland State University, Portland, OR, USA.

Analytical Modeling
The emphasis is on some practically important yet paradoxical (i.e., intuitively non-obvious) materials-reliability-related situations and phenomena in electronics and optics. It is concluded that all three basic approaches in microelectronics and photonics materials science and engineering - analytical (mathematical) modeling, numerical modeling (simulation), and experimental investigations - are equally important in understanding the physics of materials behavior and in designing, on this basis, viable and reliable electronic devices and products. As they say, if your only tool is a hammer, all problems look like nails, do they not?

Tuesday 27 September 2016

A Class of Non-associative Algebras Including Flexible and Alternative Algebras, Operads and Deformations

Nonassociative Algebras

There exist two types of non-associative algebras whose associator satisfies a symmetric relation associated with a 1-dimensional invariant vector space with respect to the natural action of the symmetric group Σ3. The first one corresponds to the Lie-admissible algebras, and this class has been studied in a previous paper of Remm and Goze. Here we are interested in the second one, corresponding to the third-power associative algebras.


Recently, we have classified, for binary algebras, the relations of non-associativity which are invariant with respect to the action of the symmetric group on three elements Σ3 on the associator. In particular, we have investigated two classes of non-associative algebras.

Saturday 24 September 2016

Parametric Resonance Applications in Neutrophil Dynamics

Neutrophil Dynamics
The profound effects of chemotherapy on the combined dynamics of the haematological stem cells and their differentiated neutrophils are examined. G-CSF is often used to deal with the resulting neutropenia, and the response is highly variable. To shape the neutrophil response to chemotherapy and G-CSF, periodic parametric resonance is discussed. Periodic oscillation in neutrophil levels and the subharmonic 1:2 resonance phenomenon are observed under the assumption that chemotherapy is given periodically. The work aims to stimulate further investigations and practical applications.

Friday 23 September 2016

Reasons for drug resistance in ESBL enzymes

drug resistance in ESBL enzymes
Extended-spectrum beta-lactamases (ESBLs) are enzymes that confer resistance against antibiotics. The spread of antimicrobial resistance by ESBLs to humans is due in part to edible animal meat. When these enzymes were isolated from different types of meat and tested for antibiotic susceptibility, antibiotic resistance was observed. When the isolated ESBL producers were subjected to PCR, the major ESBL genes, such as bla CTX-M, OXA, PER, and GES, which confer resistance to third-generation cephalosporins, were amplified. It is crucial to follow specific treatment protocols to control drug resistance.

Thursday 22 September 2016

A New Number Theory - Algebra Analysis

Number Theory

The article proposes some analytical considerations about 3D algebra and the possibilities of extending some standard 2D analytical functions to 3D. It also considers some problems concerning derivatives and integrals. The transformations between the polar notation and the Cartesian notation of a point P (and vice versa) give a 3D algebra defined as an extension of the sum and product of the standard 2D complex algebra.

Tuesday 20 September 2016

Methods for Identifying Differentially Expressed Genes: An Empirical Comparison

Microarray technology, which observes thousands of gene expressions at once, has been one of the popular topics of recent decades. When it comes to analyzing microarray data to identify differentially expressed (DE) genes, many methods have been proposed and modified for improvement.

Genes
However, the most popular methods, such as Significance Analysis of Microarrays (SAM), samroc, fold change, and rank product, are far from perfect. Which method is most powerful comes down to the characteristics of the sample and the distribution of the gene expressions. The most practiced method is usually SAM or samroc, but when the data tends to be skewed, the power of these methods decreases. Based on the idea that the median is a better measure of central tendency than the mean when the data is skewed, the test statistics of the SAM and fold change methods are modified in this paper. This study shows that the median-modified fold change method improves the power in many cases when identifying DE genes, provided the data follows a lognormal distribution.
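The central idea, swapping the median for the mean inside the fold change statistic, can be illustrated in a few lines (a toy example of our own, not the paper's actual test statistic):

```python
import statistics

def fold_change(case, control, center=statistics.mean):
    """Fold change between two expression samples, using a pluggable
    measure of center: mean (classical) or median (skew-robust)."""
    return center(case) / center(control)

case = [2.0, 2.1, 2.2, 9.0]        # skewed: one outlier inflates the mean
control = [0.9, 1.0, 1.0, 1.1]

mean_fc = fold_change(case, control)                        # pulled up by 9.0
median_fc = fold_change(case, control, statistics.median)   # robust to it
print(round(mean_fc, 3), round(median_fc, 3))
```

With the single outlier present, the mean-based fold change roughly doubles, while the median-based version stays close to the bulk of the data, which is the behavior that motivates the modification for skewed (e.g., lognormal) expression values.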

Monday 19 September 2016

On De Broglie’s Double-particle Photon Hypothesis

De Broglie's Double-particle Photon Hypothesis
This paper establishes an LC equation and a local-fields equation describing permanently localized photons, from the analysis of kinetic energy circulation within the energy structure of the double-particle photon that Louis de Broglie hypothesized in the early 1930s. Among other interesting features, these equations provide a mechanical explanation of the localized photon's properties of self-propelling at the speed of light and of self-guiding in a straight line when no external interaction tends to deflect its trajectory. This paper summarizes the seminal considerations that led to the development of the 3-spaces model.

Thursday 15 September 2016

Generalizing Two Structure Theorems of Lie Algebras to the Fuzzy Lie Algebras

Lie algebras were introduced by Sophus Lie, and they have many applications in several branches of physics. The notion of fuzzy sets was introduced by Zadeh, and many mathematicians have been involved in extending the concepts and results of abstract Lie algebra to fuzzy theory. 

This paper continues results obtained in our previous work, where we presented conditions for generalizing the concepts of solvable and nilpotent radicals of Lie algebras (called solvable and nilpotent fuzzy radicals, respectively) to a class of fuzzy Lie algebras. In this article we use the solvable fuzzy radical to generalize the structure theorem of semisimple Lie algebras and Levi's decomposition theorem to a class of fuzzy Lie algebras. The results presented here remain strongly connected with those earlier results.

Wednesday 14 September 2016

Sentiment Patterns

Even apart from the instability due to speculation, there is the instability due to the characteristic of human nature that a large proportion of our positive activities depend on spontaneous optimism rather than mathematical expectations, whether moral or hedonistic or economic. Most, probably, of our decisions to do something positive, the full consequences of which will be drawn out over many days to come, can only be taken as the result of animal spirits—a spontaneous urge to action rather than inaction, and not as the outcome of a weighted average of quantitative benefits multiplied by quantitative probabilities.

Human action is, to a great extent, predictable. Humans are rational and endowed with the ability to weigh benefits and costs in search of the best possible expected outcome. Despite this straightforward evidence, many circumstances involving decision-making show evident departures from strictly rational behavior. The complexity of the problems faced by individuals often compels them to adopt simple heuristics, to engage in strategic complementarities, and to decide based on instincts or sentiments. 

Tuesday 13 September 2016

The Traditional Ordinary Least Squares Estimator under Collinearity

In a multiple regression analysis, it is usually difficult to interpret the estimators of the individual coefficients if the explanatory variables are highly inter-correlated. Such a problem is often referred to as the multicollinearity problem. There exist several ways to address it; one is ridge regression. Two approaches to estimating the shrinkage ridge parameter k are proposed. 

Comparison is made with other ridge-type estimators. To investigate the performance of our proposed methods relative to traditional ordinary least squares (OLS) and the other approaches for estimating the parameters of the ridge regression model, we calculate the mean squared error (MSE) using simulation techniques. Results of the simulation study show that the suggested ridge regression outperforms both the OLS estimator and the other ridge-type estimators in all of the different situations evaluated in this paper.
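The kind of simulation described above can be sketched in a few lines. The paper's two proposed estimators of k are not reproduced; a fixed illustrative k is used, and the collinearity level, sample size, and true coefficients are arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 100, 4, 1.0           # sample size, predictors, illustrative ridge parameter k
beta = np.ones(p)               # true coefficients

# Highly inter-correlated explanatory variables: all columns share one factor.
base = rng.normal(size=(n, 1))
X = base + 0.01 * rng.normal(size=(n, p))
y = X @ beta + rng.normal(size=n)

# OLS solves (X'X) b = X'y; ridge adds k to the diagonal before solving.
ols = np.linalg.solve(X.T @ X, X.T @ y)
ridge = np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

def mse(b):
    """Mean squared error of an estimate against the true beta."""
    return np.mean((b - beta) ** 2)

print(mse(ols), mse(ridge))     # under strong collinearity, ridge MSE is typically far smaller
```

Because the columns of X are nearly identical, X'X is close to singular and the OLS coefficients are wildly unstable, while the ridge penalty stabilizes them; averaging this MSE over many replications is the comparison the abstract describes.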

Monday 12 September 2016

How SI Units Hide the Equal Strength of Gravitation and Charge Fields

The use of SI units in their existing form hides the fact that gravity is not the weakest force. The paper shows through symmetry arguments that Planck's constant h and the gravitational constant G are both dimensionless ratios when dimensional analysis is used at property levels deeper than mass, length and time. The resulting adjustments shown to be needed for SI units produce much simpler sets of units, which also resolve the question of why magnetic field H and magnetic induction B have not previously had the same units. 

The result shows that gravitational and charge fields have the same strengths when considered in fractional adjusted-Planck values. By showing that h and G are dimensionless, they can be understood as unit-dependent ratios which can be eliminated from all equations by merging them into new adjusted SI units. The implications are that mass and charge sizes, and distance, are not the properties which separate quantum and classical gravitational systems. The equivalence of gravitational and inertial mass is also shown. The new type of dimensional analysis shows how to uncover any law of nature or universal constant, and that the current set of properties of nature is missing two from the set, whose dimensions and units can be inferred.