Metadata-Version: 2.4
Name: PyOPUS
Version: 0.12
Summary: A simulation-based design optimization library
Author-email: Árpád Bűrmen <arpad.buermen@fe.uni-lj.si>
License-Expression: AGPL-3.0-or-later
Project-URL: homepage, http://fides.fe.uni-lj.si/pyopus/
Project-URL: documentation, http://fides.fe.uni-lj.si/pyopus/
Platform: Linux
Platform: Windows
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: numpy
Requires-Dist: scipy
Requires-Dist: matplotlib
Requires-Dist: greenlet
Requires-Dist: mpi4py
Requires-Dist: cvxopt
Requires-Dist: pyqtgraph
Requires-Dist: lxml
Requires-Dist: pyqt5
Requires-Dist: pandas
Dynamic: license-file

# Introduction 

PyOPUS is a library for simulation-based optimization of arbitrary systems. 
It was developed with circuit optimization in mind. The library is the basis 
for the PyOPUS GUI, which makes it possible to set up design automation tasks with 
ease. In the GUI you can also view the results and plot the waveforms 
generated by the simulator. 

PyOPUS provides several optimization algorithms (Coordinate Search, 
Hooke-Jeeves, Nelder-Mead Simplex, Successive Approximation Simplex, PSADE 
(global), MADS, ...). Optimization algorithms can be fitted with plugins that 
are triggered at every function evaluation and have full access to the 
internals of the optimization algorithm. 
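The plugin mechanism described above can be illustrated with a minimal sketch. This is a conceptual example only, not the actual PyOPUS API: it shows a toy coordinate-search optimizer that invokes registered plugin callables at every function evaluation, giving them full access to the optimizer's state.

```python
# Conceptual sketch of the plugin idea -- NOT the PyOPUS API.
# Plugins are callables invoked at every function evaluation
# with full access to the optimizer's internals.

class CoordinateSearch:
    """Minimal coordinate-search optimizer with plugin hooks."""
    def __init__(self, f, x0, step=1.0, min_step=1e-6):
        self.f = f
        self.x = list(x0)
        self.fx = None
        self.step = step
        self.min_step = min_step
        self.niter = 0
        self.plugins = []          # called after every evaluation

    def _eval(self, x):
        fx = self.f(x)
        self.niter += 1
        for plugin in self.plugins:
            plugin(self, x, fx)    # plugin sees the full optimizer state
        return fx

    def run(self):
        self.fx = self._eval(self.x)
        while self.step > self.min_step:
            improved = False
            for i in range(len(self.x)):
                for d in (+self.step, -self.step):
                    xt = list(self.x)
                    xt[i] += d
                    ft = self._eval(xt)
                    if ft < self.fx:
                        self.x, self.fx = xt, ft
                        improved = True
            if not improved:
                self.step /= 2.0   # contract when no axis direction helps
        return self.x, self.fx

# Example plugin: record the value of every evaluation
history = []
opt = CoordinateSearch(lambda x: (x[0] - 1)**2 + (x[1] + 2)**2, [0.0, 0.0])
opt.plugins.append(lambda o, x, fx: history.append(fx))
xbest, fbest = opt.run()
```

The same hook could just as easily implement early stopping or live plotting, since the plugin receives the optimizer object itself.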

PyOPUS has a large library of optimization test functions that can be used for 
optimization algorithm development. The functions include benchmark sets by 
Moré-Garbow-Hillstrom, Lukšan-Vlček (nonsmooth problems), Karmitsa (nonsmooth 
problems), Moré-Wild, global optimization problems by Yao, Hedar, and Yang, 
problems used in the development of MADS algorithms, and an interface to 
thousands of problems in the CUTEr/CUTEst collection. Benchmark results can 
be converted to data profiles that visualize the relative performance of 
optimization algorithms. 
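The data-profile idea can be sketched in a few lines of NumPy. This is a hedged illustration of the Moré-Wild data profile concept, not PyOPUS code: for each problem/solver pair we record how many evaluations were needed to reach the convergence target, normalize by the cost of one simplex gradient (n+1 evaluations for an n-dimensional problem), and report the fraction of problems solved within a given budget.

```python
# Hedged sketch of a data profile (in the spirit of More-Wild),
# not the PyOPUS implementation.
import numpy as np

def data_profile(evals, dims, alphas):
    """evals:  (n_problems, n_solvers) evaluations needed to reach the
               convergence target (np.inf if the solver failed)
    dims:   per-problem dimensions n_p; budgets are measured in units
               of n_p + 1 evaluations (one simplex gradient)
    alphas: budgets at which to sample the profile
    Returns an (n_alphas, n_solvers) array of solved fractions."""
    evals = np.asarray(evals, float)
    units = evals / (np.asarray(dims, float)[:, None] + 1.0)
    return np.array([(units <= a).mean(axis=0) for a in alphas])

# Two problems (dims 4 and 9), two solvers; solver 2 fails on problem 2
profile = data_profile([[10, 20], [30, np.inf]], [4, 9], [2.0, 3.0, 4.0])
```

Plotting each column of `profile` against the budgets yields the familiar staircase curves that visualize relative solver performance.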

The ``pyopus.simulator`` module currently supports SPICE OPUS, Ngspice, Xyce, 
HSPICE, and SPECTRE (OP, DC, TF, AC, TRAN, and NOISE analyses, as well as 
collecting device properties like vdsat). The interface is simple 
and can be easily extended to support any simulator.

PyOPUS provides an extensible library of postprocessing functions which
enable you to easily extract performance measures like gain, bandwidth, rise
time, slew-rate, etc. from simulation results.
The collected performance measures can be further post-processed to obtain
a user-defined cost function which can be used for guiding the optimization
algorithms toward better circuits.
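The kind of post-processing described above can be illustrated with generic NumPy code. This is not the PyOPUS API; it merely shows how a performance measure such as gain and bandwidth might be extracted from a simulated AC magnitude response.

```python
# Illustrative post-processing sketch with plain NumPy,
# not the PyOPUS measurement library.
import numpy as np

def gain_and_bandwidth(f, mag):
    """f: frequency points [Hz], mag: |H(jw)| on a linear scale.
    Returns (low-frequency gain in dB, -3 dB bandwidth in Hz)."""
    a0 = mag[0]                           # low-frequency gain
    gain_db = 20.0 * np.log10(a0)
    target = a0 / np.sqrt(2.0)            # -3 dB level
    idx = np.nonzero(mag < target)[0][0]  # first point below -3 dB
    # Linear interpolation between the two neighbouring points
    f1, f2 = f[idx - 1], f[idx]
    m1, m2 = mag[idx - 1], mag[idx]
    bw = f1 + (target - m1) * (f2 - f1) / (m2 - m1)
    return gain_db, bw

# Synthetic single-pole response: a0 = 1000, pole at 1 kHz
freq = np.logspace(0, 7, 1401)
mag = 1000.0 / np.sqrt(1.0 + (freq / 1e3) ** 2)
g, bw = gain_and_bandwidth(freq, mag)
```

A cost function for the optimizer could then penalize, say, `max(0, 58.0 - g)` to push the gain above a 58 dB specification.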

At a higher level of abstraction PyOPUS provides sensitivity analysis, 
parameter screening, worst case performance analysis, worst case distance 
analysis (deterministic approximation of parametric yield), and Monte Carlo 
analysis (statistical approximation of parametric yield). Designs can be 
sized efficiently across a large number of corners. PyOPUS fully automates 
the procedure for finding a circuit that exhibits the desired parametric yield. 
Most of these procedures can take advantage of parallel computing which 
significantly speeds up the process. 
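The Monte Carlo flavour of yield estimation mentioned above amounts to sampling the statistical parameters and counting how often the specifications are met. The following sketch is independent of the PyOPUS API; the "circuit" is replaced by a hypothetical closed-form performance model, whereas in practice each sample would run a simulation.

```python
# Minimal illustration of statistical parametric-yield estimation
# (Monte Carlo). The performance model below is hypothetical; in a
# real flow every sample triggers a circuit simulation.
import numpy as np

rng = np.random.default_rng(0)

def performance(vth_offsets):
    # Toy model: gain [dB] degraded by threshold-voltage mismatch
    # between two devices.
    return 60.0 - 40.0 * np.abs(vth_offsets[:, 0] - vth_offsets[:, 1])

n = 100_000
samples = rng.normal(0.0, 0.05, size=(n, 2))   # sigma = 50 mV per device
perf = performance(samples)
yield_est = np.mean(perf >= 58.0)              # spec: gain >= 58 dB
```

Worst-case distance analysis replaces this sampling with a deterministic search for the statistically closest failing point, which is why it is far cheaper in simulations.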

Parallel computing is supported through the use of the MPI library. A 
cluster of computers is represented by a VirtualMachine object which
provides a simple interface to the underlying MPI library. Parallel programs 
can be written with the help of a simple cooperative multitasking OS. This 
OS can outsource function evaluations to computing nodes, but it can also 
perform all evaluations on a single processor. 
Writing parallel programs follows the UNIX philosophy. A function can be run 
remotely with the ``Spawn`` OS call. One or more remote functions can be 
waited on with the ``Join`` OS call. The OS is capable of running a parallel 
program on a single computing node using cooperative multitasking or on a set 
of multiple computing nodes using a VirtualMachine object. Parallelism can be 
introduced on multiple levels of the program (i.e. parallel performance 
evaluation across multiple corners, parallel optimization algorithms, solving 
multiple worst case performance problems in parallel, ...). 
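The Spawn/Join programming model described above can be sketched with the standard library as a stand-in. The actual PyOPUS OS calls and MPI transport differ; this only illustrates the pattern of launching evaluations and waiting on whichever finishes first.

```python
# The Spawn/Join pattern sketched with concurrent.futures as a
# stand-in for the PyOPUS cooperative OS (the real API and the MPI
# transport differ; this only shows the programming model).
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

def evaluate(corner):
    # Placeholder for a simulation in one corner.
    return corner, corner ** 2

def run_all(corners, max_workers=4):
    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as ex:
        # "Spawn": launch one evaluation per corner
        pending = {ex.submit(evaluate, c) for c in corners}
        # "Join": wait for any evaluation to finish, collect, repeat
        while pending:
            done, pending = wait(pending, return_when=FIRST_COMPLETED)
            for fut in done:
                corner, value = fut.result()
                results[corner] = value
    return results

results = run_all([0, 1, 2, 3])
```

Because results are collected as they complete rather than in submission order, slow corners do not block the consumption of fast ones.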

PyOPUS provides a plotting mechanism based on Matplotlib and PyQt5 with 
an interface and capabilities similar to those available in MATLAB.
The plots are handled by a separate thread so you can write your programs
just like in MATLAB. Professional quality plots can be 
easily exported to a large number of raster and vector formats for inclusion 
in your documents. The plotting capability is used in the ``pyopus.visual`` module 
that enables the programmer to visualize the simulation results after an 
optimization run or even during an optimization run. The GUI uses an embedded 
pyqtgraph widget for producing plots.  

PyOPUS is being developed by Árpád Bűrmen at the EDA Laboratory, University of Ljubljana, 
Slovenia with contributions from Janez Puhan, Jernej Olenšek, Gregor Cijan, and Iztok Fajfar. 
It is free software released under the [GNU Affero General Public License 3.0](LICENSE), 
unless otherwise noted in individual files. 


# Obtaining PyOPUS

You can obtain the official releases at [the PyOPUS homepage](https://fides.fe.uni-lj.si/pyopus). 
Source packages, documentation and demos, and prebuilt wheel files can be found 
there. For Windows users a PyOPUS quickstart package is available for the latest 
PyOPUS release. The package includes a Python installer, Microsoft MPI installer, 
all the dependencies in the form of wheel files, and the PyOPUS wheel file. Read the 
[README](README) file to find out more. 

If you want to contribute to PyOPUS or just fiddle with the source code look at 
[the official PyOPUS git repository](https://codeberg.org/arpadbuermen/PyOPUS). 


# Building, installing, and setting it up

Instructions can be found in the [README](README) file. Also read it if you want 
to build a development version of PyOPUS from [the git repository](https://codeberg.org/arpadbuermen/PyOPUS). 

A tarball of the source code is available for each PyOPUS release. More 
adventurous users can clone [the git repository](https://codeberg.org/arpadbuermen/PyOPUS). 


# Documentation

The online version of the documentation for the latest PyOPUS release 
is available at [the PyOPUS homepage](https://fides.fe.uni-lj.si/pyopus). 
The documentation includes a reference manual (pretty much up to date) and tutorials
(can be a bit outdated, but still useful). 

Tarballs containing documentation in HTML format, as well as demos, are supplied 
with each PyOPUS release. 


# Want to start quickly with PyOPUS and see some results? 

Install it. Download and unpack the documentation and examples. Make sure one of the supported simulators is in the system PATH. Then run one of the GUI examples (one for each simulator) under 
`demo/gui/miller*` by typing
```
pyog miller.pog
```

A GUI window will open. Go to the Design Tasks tab, select the evaluate task, and choose Task/Start locally from the menu. Then select Task/View results from the menu. This opens a tab with the task results. In the results tab choose Pass 1 verification and click the items in the middle column to inspect various aspects of the evaluated circuit.

Next, start a corner-based design run by selecting the corners task in the Design Tasks tab and choosing Task/Start locally. Open the results tab by choosing Task/View results and inspect the results as they are generated by the optimizer. Note that plots are available only for the pass verification result nodes; for all other result nodes only numerical results are available. You can change that by setting the Save waveforms option under the task's Output settings to For every saved result. Note that running tasks this way can produce a lot of results and take up a lot of space on your drive. Now stop the task and start it again. This time waveforms are available for all result nodes.

Fed up with slow optimization? Make sure the corners task is selected in the Design Tasks tab and choose Task/Stop to stop the currently running task. Next, choose View/MPI hosts to open the hosts tab and adjust the number of local processors you will be using. Then choose Task/Start on cluster from the menu, go to the results tab, and watch the results pour in.

Note that at this point Ngspice has some startup overhead, so the parallel speedup is less obvious. Best results are obtained with SPICE OPUS or Xyce. 
