A Hybrid Optimization Method for Solving Bayesian Inverse Problems under Uncertainty.

Zhang K, Wang Z, Zhang L, Yao J, Yan X - PLoS ONE (2015)

Bottom Line: The goal of history matching is to identify the minimum value of an objective function that expresses the misfit between the predicted and measured data of a reservoir. The optimization is constrained by a linear equation that contains the reservoir parameters. We reformulate the reservoir model's parameters and dynamic data by operating on the objective function, whose approximate gradient can guarantee convergence.
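
For orientation, the Bayesian objective function referred to here is usually written in the history-matching literature as a regularized least-squares misfit. The expression below is a sketch of that standard form, offered as an assumption; it is not necessarily the exact formulation used in this paper:

O(m) = \frac{1}{2}\,(m - m_{\mathrm{pr}})^{T} C_{M}^{-1} (m - m_{\mathrm{pr}}) + \frac{1}{2}\,\bigl(g(m) - d_{\mathrm{obs}}\bigr)^{T} C_{D}^{-1} \bigl(g(m) - d_{\mathrm{obs}}\bigr)

where m denotes the reservoir parameters, m_pr the prior mean, C_M the prior covariance, g(m) the data predicted by the reservoir simulator, d_obs the measured dynamic data, and C_D the covariance of the measurement errors.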


Affiliation: China University of Petroleum, 66 Changjiang West Road, Qingdao, Shandong, 266555, China.

ABSTRACT
In this paper, we investigate the application of a new method, the Finite Difference and Stochastic Gradient (Hybrid) method, for history matching in reservoir models. History matching is a process of solving an inverse problem by calibrating reservoir models to the dynamic behaviour of the reservoir, in which an objective function is formulated based on a Bayesian approach for optimization. The goal of history matching is to identify the minimum value of an objective function that expresses the misfit between the predicted and measured data of a reservoir. To address the optimization problem, we present a novel application that combines the stochastic gradient and finite difference methods for solving inverse problems. The optimization is constrained by a linear equation that contains the reservoir parameters. We reformulate the reservoir model's parameters and dynamic data by operating on the objective function, whose approximate gradient can guarantee convergence. At each iteration step, we compare the magnitudes of the components of the stochastic gradient to identify its relatively 'important' elements, replace those elements with values obtained from the finite difference method to form a new gradient, and then iterate with this new gradient. Through the application of the Hybrid method, we optimize the objective function efficiently and accurately. We present a number of numerical simulations in this paper that show that the method is accurate and computationally efficient.
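
The abstract describes the hybrid gradient construction only in words. As an illustration, the sketch below gives one plausible Python reading of that idea: an SPSA-style stochastic gradient is estimated from two perturbed simulations, its largest-magnitude ('important') components are re-estimated with one-sided finite differences, and the resulting gradient drives a plain gradient-descent update. The names and settings used here (hybrid_gradient, delta, fd_step, n_important, step) and the specific selection and update rules are illustrative assumptions, not the authors' exact algorithm.

import numpy as np

def hybrid_gradient(objective, m, delta=0.01, fd_step=1e-4, n_important=5, rng=None):
    # Illustrative hybrid gradient estimate: an SPSA-style stochastic gradient
    # whose largest-magnitude components are replaced by one-sided
    # finite-difference values.
    rng = np.random.default_rng() if rng is None else rng
    n = m.size

    # SPSA-style simultaneous perturbation: two objective evaluations in total.
    perturb = rng.choice([-1.0, 1.0], size=n)
    f_plus = objective(m + delta * perturb)
    f_minus = objective(m - delta * perturb)
    grad = (f_plus - f_minus) / (2.0 * delta * perturb)

    # Select the relatively 'important' components by magnitude ...
    idx = np.argsort(np.abs(grad))[-n_important:]

    # ... and re-estimate them with one-sided finite differences.
    f0 = objective(m)
    for i in idx:
        e = np.zeros(n)
        e[i] = fd_step
        grad[i] = (objective(m + e) - f0) / fd_step
    return grad

def minimize_hybrid(objective, m0, step=0.01, n_iter=100, **kwargs):
    # Plain gradient descent driven by the hybrid gradient estimate.
    m = m0.copy()
    for _ in range(n_iter):
        m = m - step * hybrid_gradient(objective, m, **kwargs)
    return m

# Toy usage: minimise a quadratic misfit over 20 parameters.
m_est = minimize_hybrid(lambda m: float(np.sum(m ** 2)), np.ones(20))

In this reading, each iteration costs roughly 3 + n_important objective evaluations (i.e. reservoir simulations), which is how the method keeps the per-iteration cost far below a full finite-difference gradient over every parameter.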



Fig 3 (pone.0132418.g003): The convergence of the objective function. A: The objective function value versus function evaluations. B: The objective function value versus iterations.

Mentions: Fig 3(A) shows that the objective function value decreases with both Algorithm II and the Hybrid algorithm, which means that the objective function also converges under the Hybrid algorithm. The figure also shows that the Hybrid algorithm performs better than Algorithm II, with a higher rate of convergence. After approximately 400 function evaluations, the objective function value of the Hybrid algorithm is close to its final value, whereas Algorithm II requires roughly 800 evaluations to reach the same value. Thus, the Hybrid algorithm reaches the same converged result as Algorithm II with fewer simulation runs.

