Department of Mechanical and Materials Engineering

 

This is Dr. Levy’s EGM5346 Computational Engineering Analysis Spring 2019 page

 

Florida International University is a community of faculty, staff and students dedicated to generating and imparting knowledge through 1) excellent teaching and research, 2) the rigorous and respectful exchange of ideas, and 3) community service. All students should respect the right of others to have an equitable opportunity to learn and honestly demonstrate the quality of their learning. Therefore, all students are expected to adhere to a standard of academic conduct, which demonstrates respect for themselves, their fellow students, and the educational mission of the University. All students are deemed by the University to understand that if they are found responsible for academic misconduct, they will be subject to the Academic Misconduct procedures and sanctions, as outlined in the Student Handbook.

 

The FIU Civility Initiative is a collaborative effort by students, faculty, and staff to promote civility as a cornerstone of the FIU Community. We believe that civility is an essential component of the core values of our University. We strive to include civility in our daily actions and look to promote the efforts of others that do the same. Show respect to all people, regardless of differences; always act with integrity, even when no one is watching; be a positive contributing member of the FIU community.

 

Here is the (01/08/2019) updated syllabus for the course.

 

My office is in EC3442, and my email address is levyez@fiu.edu

My tel. no. is 305-348-3643. My fax no. is the department fax no., 305-348-1932.

 

Office hours: M 10am-12pm and W 3pm-4:30pm, or by appointment.

 

TA for this class: There is none—please see me with any questions.

 

 

In the first few lectures we will discuss what we plan to do for the semester (a review of the syllabus).  We will discuss the topic of error and why a computer will inevitably introduce errors in the process of its calculations.

 

 

 

This material and all the linked materials provided, except where stated specifically, are copyright © Cesar Levy 2019 and are provided to the students of this course only.  Use by any other individual without written consent of the author is forbidden.

 

 

We will explain the topic of roundoff error and instability in calculations and how one can minimize their effects.  We will show you how to convert from base 10 to base 2, base 8, or any other base.  We will convert both whole numbers and decimals so that you can understand the process.
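
As a minimal sketch of that conversion process (the function name, base-16 digit set, and example values below are my own illustrative choices in Python, not course material):

```python
def to_base(value, base, frac_digits=12):
    """Convert a non-negative decimal number to a given base (up to base 16).

    The whole part is converted by repeated division (remainders read in
    reverse); the fractional part by repeated multiplication (integer
    parts read in order).
    """
    digits = "0123456789ABCDEF"
    whole = int(value)
    frac = value - whole

    # Whole part: divide by the base, collect remainders.
    w = "0" if whole == 0 else ""
    while whole > 0:
        whole, r = divmod(whole, base)
        w = digits[r] + w

    # Fractional part: multiply by the base, collect integer parts.
    f = ""
    for _ in range(frac_digits):
        frac *= base
        d = int(frac)
        f += digits[d]
        frac -= d
        if frac == 0:
            break

    return w + ("." + f if f else "")

print(to_base(13.625, 2))   # 1101.101
print(to_base(13.625, 8))   # 15.5
```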

 

 

We discuss root-finding techniques using the trial and error, bisection, regula falsi, and modified regula falsi methods. We discuss Descartes' rule of signs and Cauchy's theorem for finding the search area, and the pros and cons of each method.
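
Here is a minimal Python sketch of the bisection idea; the function name, tolerance, and test equation are assumptions of mine for illustration, not the course handout:

```python
def bisection(f, a, b, tol=1e-10, max_iter=100):
    """Find a root of f in [a, b] by repeated interval halving.

    Requires f(a) and f(b) to have opposite signs (a bracketed root).
    """
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0 or 0.5 * (b - a) < tol:
            return m
        if fa * fm < 0:
            b, fb = m, fm
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

# Example: root of x^3 - x - 2 between 1 and 2 (approximately 1.5214)
print(bisection(lambda x: x**3 - x - 2, 1.0, 2.0))
```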

 

 

We continue the topic of root-finding techniques by discussing the secant, Newton-Raphson, and fixed-point iteration methods, how to speed up fixed-point iteration using Aitken's "delta squared" method, and how to implement it using Steffensen's iteration technique.  We also discuss the pros and cons of each method.
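
A small Python sketch of plain fixed-point iteration next to an Aitken-accelerated (Steffensen) version, on the assumed test equation x = cos(x); names and details are mine, not the course handouts:

```python
import math

def fixed_point(g, x0, tol=1e-10, max_iter=100):
    """Plain fixed-point iteration x_{k+1} = g(x_k)."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

def steffensen(g, x0, tol=1e-10, max_iter=100):
    """Fixed-point iteration accelerated with Aitken's delta-squared
    formula (Steffensen's iteration)."""
    x = x0
    for _ in range(max_iter):
        x1 = g(x)
        x2 = g(x1)
        denom = x2 - 2.0 * x1 + x
        if denom == 0:
            return x2
        x_acc = x - (x1 - x) ** 2 / denom   # Aitken's delta-squared step
        if abs(x_acc - x) < tol:
            return x_acc
        x = x_acc
    return x

# Solve x = cos(x); the fixed point is approximately 0.739085
print(fixed_point(math.cos, 1.0))
print(steffensen(math.cos, 1.0))
```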

 

 

We continue the discussion of root finding techniques by discussing Muller’s method.

Lecture (from the 3 minute mark to the 28 minute mark): We discuss the Newton method to solve a set of simultaneous non-linear equations.

We use this information to explain Bairstow’s method for finding real and complex roots of an equation with real coefficients.
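
As a sketch of the Newton step for simultaneous non-linear equations mentioned above, here is a small Python/NumPy example; the function names and the 2x2 test system are assumptions of mine, not from the lecture:

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-10, max_iter=50):
    """Newton's method for a system F(x) = 0 with Jacobian J(x).

    Each step solves J(x) dx = -F(x) and updates x <- x + dx.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), -F(x))
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Example system: x^2 + y^2 = 4 and x*y = 1
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0]*v[1] - 1.0])
J = lambda v: np.array([[2*v[0], 2*v[1]], [v[1], v[0]]])
print(newton_system(F, J, [2.0, 0.5]))   # approximately [1.932, 0.518]
```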

 

 

Lecture (starting at the 15 minute mark): We discussed the solution of linear algebraic equations using the Gaussian elimination method.  We discussed problems with the method and their fixes: the pivoting, partial pivoting, and scaled partial pivoting techniques.

 

 

Here is the modified document for the Gaussian elimination method with partial pivoting, which should work: http://web.eng.fiu.edu/levy\images\EGM6422\gauss elim with partial pivot.pdf

Note that you are now required to physically do the row interchanges, and the indices of the matrices no longer refer to the pivot vector, as this produced incorrect results.  This should clear up any problems you might have had in applying the algorithm given in class.
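
The following is my own Python/NumPy sketch written to follow the description above (physical row interchanges, no pivot vector); it is not the linked handout's algorithm, and the test system is an assumed example:

```python
import numpy as np

def gauss_elim_partial_pivot(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting.

    Rows are physically interchanged rather than tracked through a
    separate pivot vector.
    """
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)

    # Forward elimination with row interchanges
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))   # row of the largest pivot
        if p != k:
            A[[k, p]] = A[[p, k]]             # physical row swap
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]

    # Back substitution
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = [[2.0, 1.0, -1.0], [-3.0, -1.0, 2.0], [-2.0, 1.0, 2.0]]
b = [8.0, -11.0, -3.0]
print(gauss_elim_partial_pivot(A, b))   # expected [2, 3, -1]
```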

 

 

Lecture: We discussed the PLU decomposition of the matrix and the compact scheme as a means of making "production runs" with a changing right-hand side.

 

 

Lecture (from the beginning to the 1 hr 17 min mark):  We continue the PLU decomposition discussion.  We discuss the forward and backward substitution algorithms to solve Ux = y and Ly = P⁻¹b.  We discussed the error equation A(x - x₀) = b - b₀ = r as a means of increasing the accuracy.  We discuss norms of vectors and norms of matrices with regard to how accurately we obtain our solution, namely, the definitions of the one-norm, two-norm, Euclidean norm, Frobenius norm, and infinity norm.
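
A minimal Python sketch of the forward and backward substitution steps Ly = P⁻¹b and Ux = y; scipy.linalg.lu is used only as a convenient way to obtain P, L, and U, and the function names and test system are my own assumptions:

```python
import numpy as np
from scipy.linalg import lu   # returns P, L, U with A = P @ L @ U

def forward_sub(L, b):
    """Solve Ly = b for y (L lower triangular)."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def back_sub(U, y):
    """Solve Ux = y for x (U upper triangular)."""
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

A = np.array([[2.0, 1.0, -1.0], [-3.0, -1.0, 2.0], [-2.0, 1.0, 2.0]])
b = np.array([8.0, -11.0, -3.0])
P, L, U = lu(A)                  # PLU factorization of A
y = forward_sub(L, P.T @ b)      # Ly = P^-1 b (P is a permutation, so P^-1 = P^T)
x = back_sub(U, y)               # Ux = y
print(x)                                     # expected [2, 3, -1]
print(np.linalg.norm(b - A @ x, np.inf))     # residual measured in the infinity norm
```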

 

 


 

 

Lecture: We discuss the eigenvalue problem Ax=lx and show how the power method works.  We discuss how the eigenvectors form a spanning set and define a spanning set.  We also discussed the limitations to the power method, i.e., when two eigenvalues are close, and what can be done to handle that case.

 

 

We discussed finding the other eigenvalues using Hotelling's method, based upon the Gram-Schmidt orthogonalization method.  We discussed vibration-type problems, where ω²Mx = Kx, and casting the equation into a form suitable for the power method and Hotelling's method.
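
One common form of Hotelling's deflation subtracts λ₁v₁v₁ᵀ from a symmetric matrix before iterating again; the class presentation based on Gram-Schmidt may differ, so treat this Python sketch (with its assumed test matrix and names) only as an illustration of the deflation idea:

```python
import numpy as np

def power_iteration(A, x, iters=200):
    """Basic power iteration: returns (eigenvalue, eigenvector)."""
    for _ in range(iters):
        x = A @ x
        x = x / np.linalg.norm(x)
    return x @ A @ x, x

def hotelling_deflation(A, lam1, v1):
    """Deflate a symmetric matrix by its dominant eigenpair.

    With v1 normalized, A - lam1 * v1 v1^T keeps the remaining
    eigenvalues of A but replaces lam1 by 0, so power iteration on the
    deflated matrix picks up the next-largest eigenvalue.
    """
    v1 = v1 / np.linalg.norm(v1)
    return A - lam1 * np.outer(v1, v1)

A = np.array([[2.0, 1.0], [1.0, 2.0]])     # eigenvalues 3 and 1
lam1, v1 = power_iteration(A, np.array([1.0, 0.0]))
lam2, v2 = power_iteration(hotelling_deflation(A, lam1, v1), np.array([1.0, 0.0]))
print(lam1, lam2)                           # approximately 3.0 and 1.0
```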

 

 

We discussed the Faddeev-LeVerrier method for finding the characteristic equation and, once the eigenvalues are found, how to get their corresponding eigenvectors.  We also began talking about interpolation and interpolating polynomials.
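
A small Python sketch of the Faddeev-LeVerrier recursion for the characteristic-polynomial coefficients, with an assumed 2x2 test matrix; the eigenvalues then follow as the roots of that polynomial:

```python
import numpy as np

def faddeev_leverrier(A):
    """Coefficients of the characteristic polynomial of A.

    Returns [1, c1, ..., cn] so that
    det(lambda*I - A) = lambda^n + c1*lambda^(n-1) + ... + cn.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    coeffs = [1.0]
    M = np.array(A)                      # M1 = A
    c = -np.trace(M)                     # c1 = -tr(M1)
    coeffs.append(c)
    for k in range(2, n + 1):
        M = A @ (M + c * np.eye(n))      # Mk = A (M_{k-1} + c_{k-1} I)
        c = -np.trace(M) / k             # ck = -tr(Mk) / k
        coeffs.append(c)
    return coeffs

A = [[2.0, 1.0], [1.0, 2.0]]
print(faddeev_leverrier(A))              # [1, -4, 3] -> lambda^2 - 4*lambda + 3 = 0
print(np.roots(faddeev_leverrier(A)))    # eigenvalues 3 and 1
```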

 

 

We discussed the Lagrange polynomial, its advantages and disadvantages.  We discussed the general form of the Newton interpolation formula, how to obtain its coefficients using the Divided Difference Table.  We also showed how to evaluate such a formula.  We also discussed how to determine whether a formula was accurate and how to add another point to the formula, i.e., how to increase its degree.
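
A minimal Python sketch of building the divided-difference table and evaluating the Newton form by nested multiplication; the function names and sample data are my own assumptions:

```python
import numpy as np

def divided_differences(x, y):
    """Top entries of the divided-difference table = Newton coefficients."""
    n = len(x)
    table = np.array(y, dtype=float)
    coeffs = [table[0]]
    for k in range(1, n):
        table = (table[1:] - table[:-1]) / (np.array(x[k:]) - np.array(x[:-k]))
        coeffs.append(table[0])
    return coeffs

def newton_eval(coeffs, x_nodes, t):
    """Evaluate the Newton form with nested (Horner-like) multiplication."""
    result = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        result = result * (t - x_nodes[k]) + coeffs[k]
    return result

x = [1.0, 2.0, 4.0, 5.0]
y = [1.0, 4.0, 16.0, 25.0]         # samples of f(x) = x^2
c = divided_differences(x, y)
print(c)                            # [1.0, 3.0, 1.0, 0.0] for a quadratic
print(newton_eval(c, x, 3.0))       # 9.0
```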

 

 

We discussed using lower-order Newton formulas with a restricted number of points because of the inaccuracy of the formula at the ends of the range of data points.  We discussed the use of the Chebyshev points as a way of reducing the error at the ends.  We also discussed how to modify the use of the divided difference table if we wanted to introduce knowledge of higher derivatives into the Newton form.

 

 

We looked at spline forms such as the Hermite and cubic splines as a way to use lower-order formulas, but many more of them, to span the range of data points that we have, and how their use leads to systems of algebraic equations that we solve by previously discussed means.

 

 

We discussed least squares methods.  We also discussed through a handout when to use the different methods (least squares, interpolation formulas, spline methods, etc.).
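
A minimal Python sketch of a least-squares polynomial fit via the normal equations; the data and function name are assumed for illustration:

```python
import numpy as np

def least_squares_poly(x, y, degree):
    """Least-squares polynomial fit via the normal equations.

    Builds the Vandermonde matrix V and solves V^T V a = V^T y for the
    coefficients a (lowest degree first).
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    V = np.vander(x, degree + 1, increasing=True)
    return np.linalg.solve(V.T @ V, V.T @ y)

# Noisy samples of y = 2 + 3x
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 4.9, 8.2, 10.9, 14.1])
print(least_squares_poly(x, y, 1))   # approximately [2, 3]
```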

 

 


 

 

We discussed how to do numerical integration.  We discussed the error per step and the global error for the strip method, where the function is assumed constant in the strip; the trapezoidal method, where the function is assumed linear in the strip; and the two Simpson's methods, where the function is assumed quadratic over two strips and cubic over three strips.

We looked at how lower-order integration methods applied with different step sizes can lead to results as accurate as the higher-order methods (i.e., strip methods can be as accurate as the trapezoidal method; trapezoidal methods can be as accurate as the Simpson methods), and we called this Romberg integration, or Richardson extrapolation to the limit.
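
A small Python sketch of Romberg integration built on the composite trapezoidal rule, as described above; the level count and test integral are my own choices:

```python
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n strips."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

def romberg(f, a, b, levels=5):
    """Romberg integration: trapezoidal estimates with halved step sizes,
    combined column by column with Richardson extrapolation."""
    R = np.zeros((levels, levels))
    for i in range(levels):
        R[i, 0] = trapezoid(f, a, b, 2**i)
        for j in range(1, i + 1):
            # Richardson extrapolation to the limit
            R[i, j] = R[i, j-1] + (R[i, j-1] - R[i-1, j-1]) / (4**j - 1)
    return R[levels-1, levels-1]

print(romberg(np.sin, 0.0, np.pi))   # exact value is 2
```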

 

 

We discussed the topic of numerical differentiation.  We showed how we can use the Taylor series expansion for f(x_i + nΔx) to obtain expressions for derivatives. We passed out a sheet containing the finite difference representations of the derivatives, with errors of the form O(Δx^p).  We showed how one can obtain the formulas through linear combinations of f(x_i) and f(x_i + nΔx), knowing the order of the error we want to achieve.  We showed how we can use Richardson extrapolation methodologies to get a better and more accurate determination of the numerical derivatives.
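
A minimal Python sketch of a central-difference derivative and one Richardson extrapolation step, using an assumed test function; the error sizes in the comments are approximate:

```python
import math

def central_diff(f, x, h):
    """Central difference approximation of f'(x), error O(h^2)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def richardson_diff(f, x, h):
    """One Richardson extrapolation step on the central difference:
    combines estimates at h and h/2 to cancel the O(h^2) error term,
    giving an O(h^4) result."""
    d_h = central_diff(f, x, h)
    d_h2 = central_diff(f, x, h / 2.0)
    return (4.0 * d_h2 - d_h) / 3.0

x, h = 1.0, 0.1
exact = math.cos(1.0)
print(abs(central_diff(math.sin, x, h) - exact))     # error on the order of 1e-3
print(abs(richardson_diff(math.sin, x, h) - exact))  # error on the order of 1e-7
```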

 

 

We also showed how one can use the interpolation formulas (such as the Lagrange or Newton form) to obtain the derivatives p’(x). 

 

 

We also mentioned that the midterm will be completed by March 25 and will cover all the material up to and including integration and differentiation.

 

 

Lecture (from the 29 minute mark to the 57 minute mark): We begin the topic of the numerical solution of differential equations by using the results of the numerical integration discussed several lectures before.  We started the discussion of the solution of first-order differential equations and derived the Euler method, the modified Euler method, and the non-self-starting Euler method.
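
A minimal Python sketch of the Euler and modified Euler steps on an assumed test problem y' = y; note that "modified Euler" here means the predict-then-average-slopes (Heun-type) step, which may be presented slightly differently in class:

```python
def euler(f, t0, y0, h, n):
    """Euler's method for y' = f(t, y): y_{k+1} = y_k + h*f(t_k, y_k)."""
    t, y = t0, y0
    for _ in range(n):
        y = y + h * f(t, y)
        t = t + h
    return y

def modified_euler(f, t0, y0, h, n):
    """Modified Euler: predict with Euler, then average the slopes at
    the two ends of the step."""
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)       # slope at the predicted point
        y = y + 0.5 * h * (k1 + k2)
        t = t + h
    return y

# y' = y, y(0) = 1; exact solution y(1) = e = 2.718281828...
f = lambda t, y: y
print(euler(f, 0.0, 1.0, 0.1, 10))            # ~2.5937
print(modified_euler(f, 0.0, 1.0, 0.1, 10))   # ~2.7141
```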

 

 

The first exam was given as a take-home exam, to be returned by the start of Monday's class.

 

 


 

 

Lecture (starting at the 1 hr 10 minute mark): We discussed the Runge-Kutta method and showed how the second-order R-K method is akin to the modified Euler method.  We also provided other formulas that lead to the fourth-order R-K method and the R-K-F (Runge-Kutta-Fehlberg) method, and showed how the latter can be used for step-size control.
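
A minimal Python sketch of the classical fourth-order Runge-Kutta step on the same assumed test problem y' = y; the function names are mine, and step-size control in the R-K-F style is not shown:

```python
def rk4_step(f, t, y, h):
    """One step of the classical fourth-order Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + 0.5 * h, y + 0.5 * h * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def rk4(f, t0, y0, h, n):
    """March n steps of size h from (t0, y0)."""
    t, y = t0, y0
    for _ in range(n):
        y = rk4_step(f, t, y, h)
        t = t + h
    return y

# y' = y, y(0) = 1; RK4 reproduces e = 2.718281828... very closely
print(rk4(lambda t, y: y, 0.0, 1.0, 0.1, 10))   # ~2.718280
```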

 

 

Lecture: We discussed the weaknesses of the Runge-Kutta method and of the general forward-marching methods, leading into multi-step methods such as the Adams and Adams-Moulton methods.  We derived several of the Adams method formulas, and at least one Adams-Moulton formula.  We discussed the error involved in each step and how to determine the overall error of mixed predictor-corrector methods.

 

 

Lecture: We discussed multistep methods such as the Adams method, Adams-Moulton as a predictor-corrector method, and the Milne method.  We explained the advantages of these methods (they use information we already have) and their difficulties (step-size control is difficult to implement).
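
A minimal Python sketch of one common predictor-corrector pairing: a second-order Adams-Bashforth predictor with an Adams-Moulton (trapezoidal) corrector, started with a single modified-Euler step; the starter choice, names, and test problem are my own assumptions:

```python
def adams_pc(f, t0, y0, h, n):
    """Second-order Adams-Bashforth predictor with an Adams-Moulton
    (trapezoidal) corrector.  The first step is taken with the modified
    Euler method so that two slope values are available."""
    t, y = t0, y0
    # Starter step (modified Euler)
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    f_prev, y, t = k1, y + 0.5 * h * (k1 + k2), t + h

    for _ in range(n - 1):
        f_curr = f(t, y)
        # Predictor: two-step Adams-Bashforth
        y_pred = y + 0.5 * h * (3.0 * f_curr - f_prev)
        # Corrector: Adams-Moulton (trapezoidal rule) using the predicted value
        y = y + 0.5 * h * (f(t + h, y_pred) + f_curr)
        f_prev = f_curr
        t = t + h
    return y

# y' = y, y(0) = 1; exact y(1) = e = 2.71828...
print(adams_pc(lambda t, y: y, 0.0, 1.0, 0.1, 10))   # about 2.719
```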

 

 

Lecture (starting at the 1 hr mark): We discussed the solution of difference equations and showed how it can tend to the solution of the corresponding differential equation in some cases and produce extraneous answers in others.

 

 

The last class dealt with how one would solve the boundary value problem, either by the shooting method or by solving the implicit matrix problem.  We also discussed various possibilities regarding which method to use to solve differential equations, depending on the accuracy needed, the importance of the number of function evaluations, etc.  Here is the video related to this lecture.
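
A minimal Python sketch of the shooting method for a two-point boundary value problem, pairing an RK4 integrator with secant iteration on the initial slope; the test problem y'' = -y and all the names are assumed for illustration:

```python
import numpy as np

def integrate_ivp(fpp, a, b, y0, s, n=200):
    """Integrate y'' = fpp(x, y, y') from a to b with y(a)=y0, y'(a)=s,
    using RK4 on the equivalent first-order system.  Returns y(b)."""
    h = (b - a) / n
    def F(x, u):                      # u = [y, y']
        return np.array([u[1], fpp(x, u[0], u[1])])
    x, u = a, np.array([y0, s], dtype=float)
    for _ in range(n):
        k1 = F(x, u)
        k2 = F(x + 0.5*h, u + 0.5*h*k1)
        k3 = F(x + 0.5*h, u + 0.5*h*k2)
        k4 = F(x + h, u + h*k3)
        u = u + (h/6.0) * (k1 + 2*k2 + 2*k3 + k4)
        x += h
    return u[0]

def shoot(fpp, a, b, ya, yb, s0, s1, tol=1e-10, max_iter=50):
    """Shooting method: find the initial slope s so that y(b) = yb,
    using secant iteration on the end-point error."""
    e0 = integrate_ivp(fpp, a, b, ya, s0) - yb
    e1 = integrate_ivp(fpp, a, b, ya, s1) - yb
    for _ in range(max_iter):
        s2 = s1 - e1 * (s1 - s0) / (e1 - e0)
        e2 = integrate_ivp(fpp, a, b, ya, s2) - yb
        if abs(e2) < tol:
            return s2
        s0, e0, s1, e1 = s1, e1, s2, e2
    return s1

# y'' = -y with y(0) = 0, y(pi/2) = 1; exact solution y = sin(x), so y'(0) = 1
s = shoot(lambda x, y, yp: -y, 0.0, np.pi/2, 0.0, 1.0, 0.0, 2.0)
print(s)   # approximately 1.0
```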

 

 

In the next lecture we started discussing the method of relaxation as related to iterative techniques used to solve implicit matrix problems, leading to the overrelaxation paradigm.  We also spoke about the relationship of the overrelaxation parameter to the number of iterations.
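
A minimal Python sketch of successive over-relaxation (SOR), showing how the choice of the relaxation parameter ω changes the iteration count; the test system, tolerances, and names are my own assumptions:

```python
import numpy as np

def sor(A, b, omega, x0=None, tol=1e-10, max_iter=10000):
    """Successive over-relaxation for Ax = b.

    omega = 1 gives Gauss-Seidel; 1 < omega < 2 over-relaxes the update.
    Returns the solution and the number of iterations used.
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for it in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x_gs = (b[i] - sigma) / A[i, i]          # Gauss-Seidel value
            x[i] = (1.0 - omega) * x[i] + omega * x_gs
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, it
    return x, max_iter

A = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [2.0, 4.0, 10.0]
for omega in (1.0, 1.1, 1.25):
    x, iters = sor(A, b, omega)
    print(omega, iters, x)    # iteration count varies with omega; x = [1, 2, 3]
```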

 

 

We then began speaking about solutions of partial differential equations: the similarities and differences between three types of PDEs, namely, the hyperbolic, parabolic, and elliptic PDEs.  These are characterized by the wave equation (hyperbolic), the heat equation (parabolic), and the Laplace equation (elliptic).

 


We begin the discussion of parabolic differential equations with explicit and implicit methods, as well as initial and boundary conditions, limiting conditions, convergence, stability and consistency, and well-posedness and sufficiency.  Here is the video of the lecture that discusses the numerical solution of the parabolic PDE characterized by the heat equation.  We show the von Neumann stability analysis for the explicit forward difference in time O(Δt, Δx²) scheme and the implicit backward difference in time O(Δt, Δx²) scheme, and we started discussing the centered difference in time O(Δt², Δx²) scheme.
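
A minimal Python sketch of the explicit forward-difference-in-time (FTCS) scheme for the heat equation, enforcing the stability bound r = αΔt/Δx² ≤ 1/2 that comes out of the von Neumann analysis; the grid, initial condition, and names are my own assumptions:

```python
import numpy as np

def heat_ftcs(u0, alpha, dx, dt, steps):
    """Explicit forward-difference-in-time (FTCS) scheme for
    u_t = alpha * u_xx with fixed (Dirichlet) boundary values.

    The von Neumann analysis requires r = alpha*dt/dx^2 <= 1/2 for stability.
    """
    r = alpha * dt / dx**2
    if r > 0.5:
        raise ValueError(f"unstable: r = {r:.3f} > 0.5")
    u = np.array(u0, dtype=float)
    for _ in range(steps):
        u[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return u

# Rod on [0, 1] with u = 0 at both ends and a sinusoidal initial temperature
nx = 21
x = np.linspace(0.0, 1.0, nx)
u0 = np.sin(np.pi * x)            # peak decays as exp(-pi^2 * alpha * t)
dx = x[1] - x[0]
dt = 0.4 * dx**2                  # keeps r = 0.4 < 0.5 for alpha = 1
u = heat_ftcs(u0, 1.0, dx, dt, steps=100)
print(u.max())                    # compare with exp(-pi^2 * t) at t = 100*dt
```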

 

 
