计算机视觉

某机器视觉系统的方案设计

1、 相机
1.1、相机空间分辨率
1.2、相机像素分辨率
1.3、相机快门速度
2、镜头
2.1、镜头像素分辨率
2.2、镜头空间分辨率
2.3、镜头焦距
2.4、其他因素
3、光源
3.1、类型
3.2、颜色
3.3、形状
3.4、方向
3.5、角度
3.6、光域
— — —

某机械加工企业的机器视觉系统的技术需求:
使用车床加工某款钢质圆棒,随着刀具磨损加大,后期加工零件尺寸越来越大,请问:可否使用机器视觉系统,随时检测工件直径,如果超差则随时报警?
工件最大直径16mm,要求精度2丝(即0.02mm)。

1、 相机

定性分析:
由于视觉系统仅需处理工件的二维亮度信息,因此选择黑白相机。
出于成本考虑,应选择CCD面阵相机。

1.1、确定相机空间分辨率Rs=Sf/Nf=0.02/2=0.01mm/pixel,即像素当量。其中Sf为视觉系统所需识别最小特征的尺寸0.02mm,Nf为算法识别该特征所需像素的数量2。

1.2、确定相机像素分辨率Rc=FOV/Rs=16/0.01=1600pixel。其中FOV为相机视场16mm,Rs为空间分辨率0.01mm/pixel,Rc为图像分辨率1600pixel。

1.3、确定相机快门速度Exp=Rs/v=0.01/0.05=0.2ms=200us,即曝光时间应小于200微秒。其中Rs为空间分辨率0.01mm/pixel,v为最大运动速度0.05m/sec。(线速度v=pi*D*n/1000=3m/min=0.05m/sec,其中pi为圆周率3.14,D是刀具和工件的旋转直径16mm,n是主轴转速60r/min,算出的v单位是m/min。)
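以上几步的选型计算可以用一小段Python脚本复核(仅为示意,参数均取自上文):

import math

Sf = 0.02     # 最小特征尺寸, mm
Nf = 2        # 识别该特征所需的像素数, pixel
FOV = 16.0    # 视场, mm
D = 16.0      # 工件旋转直径, mm
n = 60.0      # 主轴转速, r/min

Rs = Sf / Nf                       # 空间分辨率(像素当量), mm/pixel -> 0.01
Rc = FOV / Rs                      # 像素分辨率, pixel -> 1600
v = math.pi * D * n / 1000 / 60    # 线速度, m/s -> 约0.05
exp_s = Rs / (v * 1000)            # 曝光时间上限, s -> 约0.0002
print("Rs=%.3f mm/pixel, Rc=%.0f pixel" % (Rs, Rc))
print("v=%.3f m/s, 曝光时间 <= %.0f us" % (v, exp_s * 1e6))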

最后,结合常用规格,可以选择1600×1200(约200万像素)、1/2″靶面的黑白CCD工业相机。

2、镜头

选择镜头需要注意的第一点就是镜头与相机是否匹配。原则上,镜头的规格必须等于或大于相机的规格。特别是在测量中,最好使用稍大规格的镜头,因为镜头往往在其边缘处失真最大。

2.1、确定镜头的像素分辨率。镜头分辨率又称鉴别率或解像力,是判断镜头好坏的一个重要指标,一般用成像平面上1毫米间距内能分辨开的黑白相间的线条对数表示,单位是线对/毫米(lp/mm, line pairs per mm)。
* 根据相机靶面估算镜头分辨率:N = 180/靶面高度 = 180/4.8 ≈ 38 lp/mm(1/2″相机的靶面为6.4mm×4.8mm)。
* 根据相机像素确定镜头分辨率:1600/6.4 = 250 pixel/mm,即镜头处的像素密度是250 pixel/mm;考虑一黑一白两条线对应两个像素,需要除以2,所以镜头分辨率不低于125 lp/mm。
综合以上,确定镜头分辨率不得低于125 lp/mm。

2.2、确定镜头的空间分辨率。相机的空间分辨率为 FOV/相机靶面像素数,它与镜头分辨率本是相互独立的量,二者要通过Nyquist采样定理联系起来;绝大多数视觉系统都按 FOV/CCD像素数 的比值来确定系统分辨率。
这里要求镜头的空间分辨率至少不低于相机的0.01mm/pixel。

2.3、确定镜头焦距
机构设计考虑镜头安装在距工件约100mm的空间内;已选相机为1/2″靶面,工件视场FOV=16mm,这里按工作距离WD=30mm计算,则:
f = WD × 靶面宽度 / FOV = 30 × 6.4 / 16 = 12mm
根据经济原则选用常见系列的1/2″标准镜头,焦距确定为f=12mm。
(一般1/2″标准镜头焦距选12mm,1/3″选8mm)

2.4、其他因素
此外,可以顺便估算镜头的分辨力和数值孔径。
由线对数125 lp/mm,可以分辨间隔为1/125mm = 8um的一对黑白线,而这8um是一黑一白两条线的总宽度,所以分辨力是8/2 = 4um,这也就是艾里斑(Airy disk)的半径。
数值孔径NA ≈ 125/1500 ≈ 0.083(按可见光下"分辨率(lp/mm)≈1500×NA"的经验公式)。
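镜头部分的几个估算同样可以脚本化复核(仅为示意,经验系数与上文一致):

sensor_w, sensor_h = 6.4, 4.8      # 1/2"靶面尺寸, mm
pixels_w = 1600                    # 相机横向像素数
FOV = 16.0                         # 视场, mm
WD = 30.0                          # 工作距离, mm

lp_by_sensor = 180.0 / sensor_h    # 按靶面高度估算 -> 约38 lp/mm
px_per_mm = pixels_w / sensor_w    # 像素密度 -> 250 pixel/mm
lp_by_pixel = px_per_mm / 2        # 按Nyquist取一半 -> 125 lp/mm

f = WD * sensor_w / FOV            # 焦距 -> 12 mm
pair_um = 1000.0 / lp_by_pixel     # 一个线对的宽度 -> 8 um
resolving_um = pair_um / 2         # 分辨力 -> 4 um
NA = lp_by_pixel / 1500.0          # 数值孔径 -> 约0.083 (经验公式)
print(lp_by_sensor, px_per_mm, lp_by_pixel, f, resolving_um, NA)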

3、光源

为了有效屏蔽环境噪声在工件表面的成像,拟在测试工位设计保护腔,由光源系统提供均匀稳定的光通量,同时屏蔽外界干扰。

3.1、类型
选用LED光源:寿命长、稳定性好、响应快,卤素灯等传统光源已基本淘汰。

3.2、颜色
由于工件表面存在涂层,属强反射面,目标位于漫反射区,经分析应选择:前向、低角度、明域、以条形红光为主的组合结构光。

3.3、形状
组合条形、环形、同轴三种常见光源进行实验。目测既没有高光也没有形成阴影,接近人眼真实所见;利用直方图和频域工具分析,效果较好。

3.4、方向
根据照射方式,有前向照明和背向照明。前向照明,光源和像机位于工件同侧,利于拍摄表面特征图像,适用于物体的外观检测。背向照明,光源和像机在被测工件的两侧,易于获得高对比度(即清晰)的图像。
为有利于机械结构设计和现场安装调试,兼顾特征图像的获取,项目采用前向照明方式。

3.5、角度
根据光源与工件表面的夹角,以45°为界,大者为高角度照明,小者为低角度照明。高角度照明常用于检测工件的丝印、商标、条码、字符等。低角度照明主要用于检测对象表面的凹凸部分,例如轮廓、边界、刻字、划伤等。
项目选择低角度方式。

3.6、光域
根据光源、相机、工件三者的相对位置,有明域照明和暗域照明。明域方式把相机放置在光源的镜面反射光路上;暗域方式则让相机避开镜面反射光路。
明域照明,主要用于散射和吸收光线的缺陷的检测,大多数是背景亮缺陷暗。暗域照明主要用于平滑工件表面的含有散射光的缺陷检测,大多数是背景黑暗而缺陷可见,常见于表面污垢和表面突起。根据经验翘边坑洞适用明域照明,而裂纹砂眼适用暗域照明。
选择明域方式。

Python+Qt+OpenCV界面(03)

PyQt 作为 Python 和 Qt 之间的桥梁(bridge),可以运行于所有主流平台。

1. 绑定

利用Python的getter/setter(即property机制),可以把控件属性绑定到Python变量。
例如,ui界面上有某个edit控件,Python代码里有某个变量var,怎么实现:ui界面edit内容改变则变量var随即更新、变量var改变则ui界面edit内容随即更新?

两个方式:
* 第一种,界面改变更新变量,可以通过signal/slot实现,例如把textChanged信号连到一个更新变量的槽(onEditChanged为示意的槽函数,在其中执行 self.myVAR = text):
self.myQLineEdit.textChanged.connect(self.onEditChanged)
但是反过来(变量改变时更新界面)就麻烦了,因为myVAR只是个普通的Python属性,Python没有内建C#那样的属性变更通知机制。

* 第二种,使用Python的getter/setter(property),让self.myVAR成为property,例如:
class Foo(object):
    @property
    def xx(self):
        """This method runs whenever you try to access self.xx"""
        print("Getting self.xx")
        return self._xx

    @xx.setter
    def xx(self, value):
        """This method runs whenever you try to set self.xx"""
        print("Setting self.xx to %s" % (value,))
        self._xx = value
        # Here add code to update the control in this setter method,
        # so whenever anything modifies the value of xx, the QLineEdit will be updated.

总体上:
@yourName.setter
def yourName(self, value):
    self.myQLineEdit.setText(value)
    self._name = value
    # Note that the name data is actually being held in an attribute _name
    # because it has to differ from the name of the getter/setter.
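把以上两个方向合起来,一个双向绑定的最小示意如下(基于PyQt4;其中BoundWindow、myQLineEdit、_onEditChanged等名称均为示例假设,并非固定写法):

import sys
from PyQt4 import QtGui

class BoundWindow(QtGui.QWidget):
    def __init__(self, parent=None):
        super(BoundWindow, self).__init__(parent)
        self.myQLineEdit = QtGui.QLineEdit(self)
        self._text = ""
        # 方向1:界面 -> 变量,通过signal/slot
        self.myQLineEdit.textChanged.connect(self._onEditChanged)

    def _onEditChanged(self, value):
        self._text = value            # 只更新内部存储,不再调用setter,避免循环

    # 方向2:变量 -> 界面,通过property的setter
    @property
    def text(self):
        return self._text

    @text.setter
    def text(self, value):
        self._text = value
        self.myQLineEdit.setText(value)   # setText会触发textChanged,但槽里只写_text,不会死循环

if __name__ == '__main__':
    app = QtGui.QApplication(sys.argv)
    w = BoundWindow()
    w.text = "hello"      # 变量方向赋值,编辑框内容同步更新
    w.show()
    sys.exit(app.exec_())

这样,外部代码给 w.text 赋值会同步刷新编辑框;用户在编辑框里输入也会同步写回 _text。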

2. 信号连接

信号与槽机制是Qt最重要的特性之一,提供了任意两个Qt对象之间的通信机制。信号在某个特定事件或动作发生时被发射,槽是用于接收并处理信号的函数。

2.1 传统的signals/slot连接方式

信号与槽机制常用的连接方式为:
connect(Object1,SIGNAL(signal),Object2,SLOT(slot))
connect函数中,Object1和Object2是两个对象;signal是Object1的信号,注意要用SIGNAL宏包起来;slot是Object2中用于处理该信号的普通成员函数,要用SLOT宏包起来。当特定事件发生(如点击按钮)或者Object1调用emit的时候,signal信号被发射,对应的slot被调用。
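一个传统写法的最小示例如下(okButton和onOkClicked均为假设的示例名,写在某个QObject/QWidget子类的方法里):

# 老式写法:用SIGNAL宏给出信号签名
self.connect(self.okButton, QtCore.SIGNAL("clicked()"), self.onOkClicked)
# 等价的新式写法
self.okButton.clicked.connect(self.onOkClicked)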

2.2 PyQt 4.5 以后可以采用新的信号与槽方式

From PyQt 4.5 onwards, new customized signals can be defined as class attributes using the pyqtSignal() factory:
PyQt4.QtCore.pyqtSignal(types[, name])
valueChanged = pyqtSignal([int], ['QString'])

稍微完整的像这样:
from PyQt4 import QtCore

class MyQObject(QtCore.QObject):

    # 定义一个无参数的信号
    signal1 = QtCore.pyqtSignal()
    # 定义一个带参数的信号,参数类型为整数,信号名称为qtSignal2
    signal2 = QtCore.pyqtSignal(int, name='qtSignal2')

    def connectSigSlot(self):
        self.signal1.connect(self.myReceiver1)
        self.signal2.connect(self.myReceiver2)

    def myReceiver1(self):
        print 'myReceiver1 called'

    def myReceiver2(self, arg):
        print 'myReceiver2 called with argument value %d' % arg

    def myEmitter(self, arg):
        self.signal1.emit()
        self.signal2.emit(10)

新的signal/slot定义与使用方式是PyQt 4.5中的一大改进,可以让PyQt程序更清楚易读。PyQt 4.5以后的版本建议使用这种新方式。

3. PyQt5 的信号与槽

PyQt5自动定义了很多Qt内建信号,但为了灵活使用信号与槽机制,可以自定义signal。
自定义信号通过 pyqtSignal() 工厂函数定义,新的信号作为类的属性。

3.1 信号定义
新的信号应该定义在QObject的子类中。新的信号必须作为类定义的一部分,不允许在类定义之后以动态方式把信号作为类属性添加进去。只有通过这种方式,新的信号才能自动添加到QMetaObject中。这就意味着新定义的信号将会出现在Qt Designer中,并且可以通过QMetaObject API实现内省。

例如:
# 定义一个"closed"信号,该信号没有参数
closed = pyqtSignal()
# 定义一个"range_changed"信号,该信号有两个int类型的参数
range_changed = pyqtSignal(int, int, name='rangeChanged')

helpSignal = pyqtSignal(str)     # helpSignal 为str参数类型的信号
printSignal = pyqtSignal(list)   # printSignal 为list参数类型的信号

# 声明一个多重载版本的信号,包括:一个带int和str类型参数的版本,以及一个只带str参数的版本
previewSignal = pyqtSignal([int, str], [str])

3.2 信号和槽绑定

self.helpSignal.connect(self.showHelpMessage)
self.printSignal.connect(self.printPaper)

# previewSignal 存在两个重载版本,因此在绑定的时候需要显式指定信号和槽的对应关系。
self.previewSignal[str].connect(self.previewPaper)
self.previewSignal[int,str].connect(self.previewPaperWithArgs)

3.3 信号发射
自定义信号的发射,通过emit()方法来实现:

self.printButton.clicked.connect(self.emitPrintSignal)
self.previewButton.clicked.connect(self.emitPreviewSignal)

def emitPrintSignal(self):
    pList = []
    pList.append(self.numberSpinBox.value())
    pList.append(self.styleCombo.currentText())
    self.printSignal.emit(pList)

def emitPreviewSignal(self):
    if self.previewStatus.isChecked():
        self.previewSignal[int, str].emit(1080, "Full Screen")
    else:
        self.previewSignal[str].emit("Preview")

3.4 槽函数实现
通过 pyqtSlot 装饰器定义槽函数(PyQt4写法):
@PyQt4.QtCore.pyqtSlot()
def setValue_NoParameters(self):
    '''无参数槽方法'''
    pass

@PyQt4.QtCore.pyqtSlot(int)
def setValue_OneParameter(self, nIndex):
    '''一个参数(整数)槽方法'''
    pass
... ...
PyQt5:
def printPaper(self, plist):
    self.resultLabel.setText("Print: 份数:" + str(plist[0]) + " 纸张:" + str(plist[1]))

def previewPaperWithArgs(self, style, text):
    self.resultLabel.setText(str(style) + text)

def previewPaper(self, text):
    self.resultLabel.setText(text)

3.5 总结
自定义信号的一般流程如下:
定义信号 → 绑定信号和槽 → 定义槽函数 → 发射信号,例如:
from PyQt5.QtCore import QObject, pyqtSignal

class NewSignal(QObject):

    # 一个valueChanged的信号,该信号没有参数
    valueChanged = pyqtSignal()

    def connect_and_emit_valueChanged(self):
        # 绑定信号和槽函数
        self.valueChanged.connect(self.handle_valueChanged)
        # 发射信号
        self.valueChanged.emit()

    def handle_valueChanged(self):
        print("valueChanged signal received")

注意:要理清signal和slot的调用逻辑,避免signal和slot出现死循环,例如在slot方法中又继续发射该信号。
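一个常见的防死循环写法,是在发射信号之前先比较新旧值(以下仅为示意,Model、valueChanged等名称为假设):

from PyQt5.QtCore import QObject, pyqtSignal

class Model(QObject):
    valueChanged = pyqtSignal(int)

    def __init__(self):
        super(Model, self).__init__()
        self._value = 0

    def setValue(self, v):
        if v == self._value:
            return                    # 值未变化就不发信号,切断 signal->slot->signal 的循环
        self._value = v
        self.valueChanged.emit(v)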

4. 用 pyqtSlot 选择信号的重载版本
The pyqtSlot() decorator can be used to specify which of the signals should be connected to the slot.
For example if you were only interested in the integer variant of the signal,
then your slot definition would look like the following:

@pyqtSlot(int)
def on_spinbox_valueChanged(self, i):
    # i will be an integer
    pass

If you wanted to handle both variants of the signal, but with different Python methods,
then your slot definitions might look like the following:
@pyqtSlot(int, name='on_spinbox_valueChanged')   # note: int
def spinbox_INT_value(self, i):
    # i will be an integer
    pass

@pyqtSlot(str, name='on_spinbox_valueChanged')   # note: str
def spinbox_QSTRING_value(self, s):
    # s will be a Python string object (or a QString if they are enabled)
    pass

The following shows an example using a button when you are not interested in the optional argument:
@pyqtSlot()
def on_button_clicked(self):
    pass

5. 装饰符 decorator

pyqtSlot 这个装饰器可以把一个method定义为slot,例如:
@QtCore.pyqtSlot()
def mySlot1(self):
    print 'mySlot1 received a signal'

@QtCore.pyqtSlot(int)
def mySlot2(self, arg):
    print 'mySlot2 received a signal with argument %d' % arg

整个slot的定义与旧的方法相较,顿时变得简单许多。

而且,如果UI是用Qt Designer设计并通过pyuic4转换的,那么甚至可以通过slot的名称来指定要连接的控件和signal。
例如, UI中有一个名为myBtn的按钮,想要连接单击的clicked signal。那么只要使用装饰符定义如下slot:
@QtCore.pyqtSlot(bool)
def on_myBtn_clicked(self, checked):
    print 'myBtn clicked.'

PyQt会自动将这个slot与UI内myBtn的clicked signal连接起来,非常省事。
REF: http://python.jobbole.com/81683/

6. PyQt 与 OpenCV 的桥接

首先要把 cv.iplimage 这种 OpenCV 图像用 PyQt 的 widget 显示出来。

6.1 Image Class

从 QtGui.QImage 派生出我们CameraImage类:

$ vi myCvPyQt.py
import cv
from PyQt4 import QtGui

# image used for painting by the Qt framework,
# converted from the OpenCV image format
class CameraImage(QtGui.QImage):

    def __init__(self, opencvBgrImg):
        depth, nChannels = opencvBgrImg.depth, opencvBgrImg.nChannels
        if depth != cv.IPL_DEPTH_8U or nChannels != 3:
            raise ValueError("image must be 8-bit 3-channel")
        w, h = cv.GetSize(opencvBgrImg)

        opencvRgbImg = cv.CreateImage((w, h), depth, nChannels)

        # OpenCV images from files or cameras are in BGR format,
        # which is not what PyQt wants, so convert to RGB.
        cv.CvtColor(opencvBgrImg, opencvRgbImg, cv.CV_BGR2RGB)

        # NOTE: save a reference to the RGB image's byte content to
        # prevent the garbage collector from deleting it when __init__ returns.
        self._imgData = opencvRgbImg.tostring()

        # call the QtGui.QImage base class constructor to build the image
        # from the byte content, dimensions and pixel format.
        super(CameraImage, self).__init__(self._imgData, w, h, QtGui.QImage.Format_RGB888)

If all you want is to show an OpenCV image in a PyQt widget, that’s all you need.

6.2 CameraDevice Class

新建一个相机类,方便后期操作,实现相机和控件的解耦。
也就是说,唯一的一个相机可以作为生产者,供多个widget消费,而控件之间互不干扰;
同时,任意一个widget都可以完全地操作相机。

$ vi myCvPyQt.py
import cv
from PyQt4 import QtCore

class CameraDevice(QtCore.QObject):

    _DEFAULT_FPS = 30
    newFrame = QtCore.pyqtSignal(cv.iplimage)  # define signal with args of Camera Device

    def __init__(self, cameraId=0, mirrored=False, parent=None):
        super(CameraDevice, self).__init__(parent)
        self.mirrored = mirrored

        self._cameraDevice = cv.CaptureFromCAM(cameraId)  # get capturer

        self._timer = QtCore.QTimer(self)
        self._timer.timeout.connect(self._queryFrame)
        self._timer.setInterval(1000 / self.fps)

        self.paused = False

    @QtCore.pyqtSlot()
    def _queryFrame(self):
        frame = cv.QueryFrame(self._cameraDevice)  # get frame
        if self.mirrored:
            mirroredFrame = cv.CreateImage(cv.GetSize(frame), frame.depth, frame.nChannels)
            cv.Flip(frame, mirroredFrame, 1)
            frame = mirroredFrame
        self.newFrame.emit(frame)  # trigger signal with args of Camera Device

    @property
    def paused(self):
        return not self._timer.isActive()

    @paused.setter
    def paused(self, p):
        if p:
            self._timer.stop()
        else:
            self._timer.start()

    @property
    def frameSize(self):
        w = cv.GetCaptureProperty(self._cameraDevice, cv.CV_CAP_PROP_FRAME_WIDTH)
        h = cv.GetCaptureProperty(self._cameraDevice, cv.CV_CAP_PROP_FRAME_HEIGHT)
        return int(w), int(h)

    @property
    def fps(self):
        fps = int(cv.GetCaptureProperty(self._cameraDevice, cv.CV_CAP_PROP_FPS))
        if not fps > 0:
            fps = self._DEFAULT_FPS
        return fps

Essentially, it uses a timer to query the camera for a new frame and emits a signal passing the captured frame as a parameter.
The timer is important to avoid spending CPU time on unnecessary polling.

6.3 CameraWidget Class

The main purpose of it is to draw the frames delivered by the camera device.
But, before drawing a frame, it must allow anyone interested to process it, changing it without interfering with any other camera widget.

$ vi myCvPyQt.py
import cv
from PyQt4 import QtCore
from PyQt4 import QtGui

class CameraWidget(QtGui.QWidget):

    newFrame = QtCore.pyqtSignal(cv.iplimage)  # a signal of Camera Widget

    def __init__(self, cameraDevice, parent=None):
        super(CameraWidget, self).__init__(parent)

        self._frame = None

        self._cameraDevice = cameraDevice  # the camera device passed in
        self._cameraDevice.newFrame.connect(self._onDeviceNewFrame)  # connect with signal of Camera Device

        w, h = self._cameraDevice.frameSize
        self.setMinimumSize(w, h)
        self.setMaximumSize(w, h)

    @QtCore.pyqtSlot(cv.iplimage)
    def _onDeviceNewFrame(self, frame):
        self._frame = cv.CloneImage(frame)  # make local copy
        self.newFrame.emit(self._frame)     # trigger signal with args of Camera Widget, for external processing
        self.update()                       # repaint the widget with the new frame

    def changeEvent(self, e):
        if e.type() == QtCore.QEvent.EnabledChange:
            if self.isEnabled():
                self._cameraDevice.newFrame.connect(self._onDeviceNewFrame)
            else:
                self._cameraDevice.newFrame.disconnect(self._onDeviceNewFrame)

    def paintEvent(self, e):
        if self._frame is None:
            return
        painter = QtGui.QPainter(self)
        painter.drawImage(QtCore.QPoint(0, 0), CameraImage(self._frame))
        # paint with the frame, whether already processed or not

Every widget saves its own copy of the frame via cv.CloneImage(frame). This way, each widget can do whatever it wants safely.

However, processing the frame is not the widget's responsibility. Thus, it emits a signal with the saved frame as a parameter via emit(self._frame), and anyone connected to it can do the hard work. This usually happens inside the main block of code.

6.4 Application

$ vi myCvPyQt.py
import sys

def main():

    @QtCore.pyqtSlot(cv.iplimage)
    def onWidgetNewFrame(frame):
        cv.CvtColor(frame, frame, cv.CV_RGB2BGR)
        msg = "... processing ..."
        font = cv.InitFont(cv.CV_FONT_HERSHEY_DUPLEX, 1.0, 1.0)
        tsize, baseline = cv.GetTextSize(msg, font)
        w, h = cv.GetSize(frame)
        tpt = (w - tsize[0]) / 2, (h - tsize[1]) / 2
        cv.PutText(frame, msg, tpt, font, cv.RGB(255, 0, 0))

    app = QtGui.QApplication(sys.argv)

    cameraDevice = CameraDevice(mirrored=True)  # only one camera device

    cameraWidget1 = CameraWidget(cameraDevice)  # 1st widget
    cameraWidget1.setWindowTitle('Orig Img')
    cameraWidget1.show()

    cameraWidget2 = CameraWidget(cameraDevice)  # 2nd widget
    cameraWidget2.newFrame.connect(onWidgetNewFrame)  # connect signal with args of Camera Widget
    cameraWidget2.setWindowTitle('Processed img')
    cameraWidget2.show()

    sys.exit(app.exec_())

if __name__ == '__main__':
    main()

Two CameraWidget objects share the same CameraDevice; only the second widget processes the frames.
The result is two widgets showing different images resulting from the same frame.
Now you can import CameraWidget in a PyQt application to have fresh camera preview.

6.5 整体结构如下:
import cv
from PyQt4 import QtCore
from PyQt4 import QtGui
import sys

class CameraImage(QtGui.QImage):
… …
class CameraDevice(QtCore.QObject):
… …
class CameraWidget(QtGui.QWidget):
… …
def main():
… …

OK

BUT, it is better to do the video capturing and conversion in another thread first, and then send a signal to the GUI instead of using timers.
The advantage of this is that every time a frame is captured, the thread will send a signal to the main thread (which handles the UI) to update the component that displays the OpenCV image (in opencv2 this is the Mat object).
https://gist.github.com/saghul/1055161
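按这个思路,一个用QThread做采集、通过信号把帧(cv2的numpy数组)发回GUI线程的极简示意如下(CaptureThread、frameCaptured均为假设的示例名,用的是cv2新接口,并非上文旧cv接口代码的一部分):

import cv2
import numpy as np
from PyQt4 import QtCore

class CaptureThread(QtCore.QThread):
    # 每采集到一帧就发射该信号,参数是numpy数组形式的图像(即opencv2的Mat)
    frameCaptured = QtCore.pyqtSignal(np.ndarray)

    def __init__(self, cameraId=0, parent=None):
        super(CaptureThread, self).__init__(parent)
        self._cap = cv2.VideoCapture(cameraId)
        self._running = True

    def run(self):
        while self._running:
            ok, frame = self._cap.read()            # 阻塞在采集上,不占用GUI线程
            if ok:
                rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
                self.frameCaptured.emit(rgb)        # 发信号通知GUI线程刷新显示

    def stop(self):
        self._running = False
        self.wait()

GUI线程里只需把frameCaptured连接到一个把ndarray转成QImage并触发重绘的槽即可。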

Python+Qt界面设计(01)

PART – I 界面
1. 准备
2. Qt的C++界面设计
3. Qt的Python界面设计

PART – II 剥离界面代码和功能代码
1. 建立新类
2. 操作界面

PART – III 剥离主线程和任务线程
1. QThread线程
2. 线程通讯
3. 一个例子

PART – I 界面

Ubuntu系统比Windows稳定、小巧且免费;
OpenCV视觉库开源免费;
Python的开发效率比C/C++高;
Qt界面开发简单。
所以选择:Ubuntu + Python + OpenCV + Qt。

整体流程是:
先用qtcreator设计界面,再用pyuic4把ui文件转换为Python界面代码,最后在单独的工作线程里调用OpenCV。

1. 准备

Ubuntu 14.04 默认安装了Python 2.7和3.4,Qt默认安装了4.8和5.2,sip默认安装的是4.15,PyQt默认安装的是PyQt4。
qtcreator可能有多个版本,先确认使用的是哪一个:
$ which qtcreator
—/usr/bin/qtcreator

确认当前使用版本:
$ qmake -version
—QMake version 3.0
—Using Qt version 5.2.1 in /usr/lib/x86_64-linux-gnu

$ qtchooser -list-versions
—4
—5
—default
—qt4-x86_64-linux-gnu
—qt4
—qt5-x86_64-linux-gnu
—qt5

$ python -V
—Python 2.7.6
$ python3 -V
—Python 3.4.3

$ sip -V
—4.15.5
$ python
Python 2.7.6 (default, Jun 22 2015, 17:58:13)
>>>import sip
>>>import PyQt4
导入正常.

2. Qt的C++界面设计

Qt的C++应该是最常用的。
$ qt-creator
File–New File or project… select QT Widget Application … Choose location
例如在界面添加一个pushbutton类型的pushButton01。
然后右击pushButton01,为clicked这个signal添加一个slot,回调函数名字例如CbOnClicked()。
注意!
如果是python,就不要在creator这IDE里面添加slot的回调,因为具体都在py代码中实现。
Ctrl-B to build,
Ctrl-R to run.

总结:
QTwidgets-based-project一共4个文件:
入口文件main.cpp +mainwindow.ui文件 + mainwindow.h和mainwindow.cpp后台文件
在main.cpp -> main函数中 直接调用MainWindow类的show()方法显示主界面。
MainWindow类中有成员变量是ui,其类型是Ui::MainWindow,通过这个ui成员去访问操作界面控件

3. Qt的Python界面设计

总体是:
先用qtcreator设计界面,
然后用pyuic4把ui文件转换为Python代码。

qtcreator … File–New File or project … Applications-Qt Widgets Application … Choose … Select location
注意!
kits 选择 Qt4.8.6或5.2.1某个 … Finish

例如, 在界面添加pushbutton类型的按钮名字为pushButton01。
注意! 这里不需要在qtcreator的designer中添加slot的回调,因为具体都是在py代码中。
保存 project, ok。

界面转换成Python代码:
$ pyuic4 -x ./mainwindow.ui -o ./myGUI.py

$ nano myGUI.py
… …
self.menuBar = QtGui.QMenuBar(MainWindow)
self.menuBar.setGeometry(QtCore.QRect(0, 0, 400, 25))
self.menuBar.setObjectName(_fromUtf8("menuBar"))
MainWindow.setMenuBar(self.menuBar)
… …
可见这个py文件储存的是ui界面信息。

以下是不建议采用的操作界面方式(仅作演示)。

* 在myGUI.py中直接添加回调函数的连接:
QtCore.QObject.connect(self.pushButton01, QtCore.SIGNAL(_fromUtf8("clicked()")), self.CbOnClicked)
* 在myGUI.py中直接实现回调函数:
def CbOnClicked(self):
    print "Hello...dehao!..."
* 整体上像这样:
$ vi myGUI.py
from PyQt4 import QtCore, QtGui
... ...
class Ui_MainWindow(object):
    def setupUi(self, MainWindow):
        MainWindow.setObjectName(_fromUtf8("MainWindow"))
        MainWindow.resize(400, 300)
        MainWindow.setStatusBar(self.statusBar)
        ... ...
        self.retranslateUi(MainWindow)
        QtCore.QObject.connect(self.pushButton, QtCore.SIGNAL(_fromUtf8("clicked()")), self.CbOnClicked)
        ## ... we connect your callback function here ...
        QtCore.QMetaObject.connectSlotsByName(MainWindow)

    def retranslateUi(self, MainWindow):
        MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow", None))
        self.pushButton.setText(_translate("MainWindow", "PushOK", None))

    ## ... now realize your callback function here ...
    def CbOnClicked(self):
        print "Hello...dehao!..."

if __name__ == "__main__":
    import sys
    app = QtGui.QApplication(sys.argv)
    MainWindow = QtGui.QMainWindow()
    ui = Ui_MainWindow()
    ui.setupUi(MainWindow)
    MainWindow.show()
    sys.exit(app.exec_())

测试:
$ python myGUI.py
这样,窗口弹出,button响应。

PART – II 剥离界面代码和功能代码

前述均是直接修改pyuic4导出的Python界面代码文件。
但是一旦在qtcreator设计环境中修改了ui界面,就要重新用pyuic4导出ui文件,
那么这个myGUI.py文件就会被完全覆盖,其中修改过的(功能)代码将全部丢失。

所以, 不可以在python界面代码文件中写功能代码。
为此,新建某个文件(与myGUI.py同目录),例如main.py,在该文件中建立新类,例如myApp:

1. 建立新类

$ vi main.py
from PyQt4 import QtGui
import sys

# We make one new class myApp that combines with the ui code,
# so that we can use all of its features and interact with GUI elements.
import myGUI

# 多重继承时,子类访问一个自身未定义的属性,会按什么顺序到父类中寻找?尤其当多个父类都包含该同名属性时?
# 经典类按"从左到右、深度优先"的原则匹配;新式类用C3线性化算法(不同于简单的广度优先)进行匹配。
# 经典类的搜索路径是:先从左到右,选定一个Base之后一直沿该Base的继承链搜索到最顶端,再换下一个Base。
# super机制不仅完成了所有父类的调用,而且保证每一个父类的初始化函数只被调用一次。
class myApp(QtGui.QMainWindow, myGUI.Ui_MainWindow):
    def __init__(self):
        # super(B, self)首先找到B的父类(类A),然后把类B的对象self转换为类A的对象,
        # 再由被转换的类A对象调用自己的__init__函数。
        # super机制可以保证公共父类仅被执行一次,执行顺序按照MRO(Method Resolution Order)。
        # 混用super和以非绑定方式直接调用父类__init__是危险行为。
        # Simple reason why we use it here is that it allows us to access variables and methods etc in myGUI.py;
        # here self.__class__ refers to the subclass myApp.
        super(self.__class__, self).__init__()

        # This is defined in myGUI.py automatically. It sets up the layout and widgets that are defined there.
        self.setupUi(self)

def main():
    app = QtGui.QApplication(sys.argv)  # new instance of QApplication, same as in myGUI.py

    form = myApp()  # set the form to be our myApp (design)
    form.show()     # show the form

    app.exec_()     # in myGUI.py it is sys.exit(app.exec_())

if __name__ == '__main__':  # if we're running the file directly and not importing it
    main()                  # run the main function

以上即可显示界面,但还没有响应界面元素。

2. 操作界面

例如,为pushButton01的clicked这个event建立一个connect:
self.pushButton01.clicked.connect(self.my_func01)
And add it to the __init__ method of our myApp class so that it is set up when the application starts.

实现事件的回调函数 my_func01:
def my_func01(self):
    print "Hello...dehao..."

整体像这样:
$ vi main.py
from PyQt4 import QtGui
import sys
import myGUI

class myApp(QtGui.QMainWindow, myGUI.Ui_MainWindow):
    def __init__(self, parent=None):
        super(myApp, self).__init__(parent)
        self.setupUi(self)
        self.pushButton01.clicked.connect(self.my_func01)

    def my_func01(self):
        print "Hello...dehao..."

def main():
    app = QtGui.QApplication(sys.argv)
    form = myApp()
    form.show()
    app.exec_()

if __name__ == '__main__':
    main()

测试:
$ python main.py
… …

PART – III 剥离主线程和任务线程

以上,通过合理的组织,实现界面代码和功能代码的解耦。

但是:
全部功能均是在界面主线程进行,这对于某些耗时任务并不合适。

界面不更新、 不响应等界面冻结的体验恶劣,所以要把这类任务放在单独的线程。
通常, 界面处理所在线程为主线程, 执行具体工作的为任务线程。

1. QThread线程

1.1 建立一个工作线程像这样 :
from PyQt4.QtCore import QThread

class YourThreadName(QThread):
    def __init__(self):
        QThread.__init__(self)

    def __del__(self):
        self.wait()

    def run(self):
        # your logic here
        pass

注意不要直接调用run这个method,而应通过start()来启动线程。

1.2 使用一个工作线程像这样:
self.myThread = YourThreadName()
self.myThread.start()

线程对象可以使用quit、start、terminate、isFinished、isRunning等method;
QThread提供了finished, started, terminated等有用的signal。

2. 线程通讯

在后台背景运行的任务线程,需要把数据传给界面主线程,完成更新进度条之类。
The proper way to do communication between working threads and UI thread is using signals。

2.1 built-in signals

例如任务线程的暴力破解工作完成后, 界面可以得到消息并提示用户。

*-实现函数, 在界面主线程里实现响应函数:
def done(self):
    QtGui.QMessageBox.information(self, "Done!", "crack finished!")

*-连接函数, 在界面主线程里连接信号和函数:
self.myThread = thread01(test)
self.connect(self.myThread, SIGNAL("finished()"), self.done)
self.myThread.start()

总体上:
first make a new instance,
then connect the signal with function,
then start the thread.

查看所有可能的Qt Signals:
http://pyqt.sourceforge.net/Docs/PyQt4/qthread.html

2.2 custom signals

定制信号和内嵌信号的唯一区别,是要在QThread子类里面定义信号;
至于在界面主线程里实现响应函数以及连接信号和响应函数,做法都是相同的。

*-定义信号,在QThread子类里面定义信号,有多种方法,例如这种:
self.emit(SIGNAL('myTask(QString)'), myParam)

*-连接函数, 在界面主线程里面捕捉信号,是和内嵌信号的处理相同:
self.connect(self.myThread, SIGNAL("myTask(QString)"), self.myFunc00200)

NOTE! 但是要注意这里有个重要的定制信号和内嵌信号的不同点,就是这个信号会传递一个回调函数所需要的对象。
This signal will actually pass an object (in this case QString) to the myFunc00200 function, and we need to catch that.
If you do decide to pass something, the function that will be connected to the signal must be able to accept that argument.

*-实现函数, 既然信号传递的是QString,那么myFunc00200的实现像这样:
def myFunc00200(self, text):
    self.my_ui_list_controls.addItem(text)
    self.my_ui_progress_bar.setValue(self.my_ui_progress_bar.value() + 1)

界面主线程获得的text, 就是从任务线程传来的QString。

3. 一个例子
用户在界面输入字符串,经过耗时的暴力破解处理后,结果显示给界面,用户可以暂停,破解同时可以更新进度和结果。
为此布置界面元素如下:
输入: btn_Start,开始破解。 btn_Stop,取消破解。 edit_Control, 输入框。
显示: progress_Control,进度条。 list_Control, 列表框 破解结果。
注意列表选择 list_widget 不要用 list_view.

代码像这样:
$ vi main2.py

from PyQt4 import QtGui
from PyQt4.QtCore import QThread, SIGNAL
import sys
import myGUI2
import time

## ... Threads code ...
class thread01(QThread):

    def __init__(self, text):
        # Make a new thread instance with one text as the first argument.
        # The text argument is stored in an instance variable called text,
        # which can then be accessed by all other instance methods.
        QThread.__init__(self)
        self.text = text

    def __del__(self):
        self.wait()

    def _get_upper_char(self, ch):
        # simulate the crack task by converting to upper case
        return ch.upper()

    def run(self):
        # simulate a time-consuming task by sleeping
        for ch in self.text:
            c = self._get_upper_char(ch)
            self.emit(SIGNAL('processing_char(QString)'), c)
            self.sleep(2)

## ... main code ...
class myApp(QtGui.QMainWindow, myGUI2.Ui_MainWindow):

    def __init__(self):
        super(self.__class__, self).__init__()
        self.setupUi(self)
        self.btn_Start.clicked.connect(self.start_process)

    def start_process(self):
        text = str(self.edit_Control.text()).strip()
        self.progress_Control.setMaximum(len(text))
        self.progress_Control.setValue(0)

        self.mythread = thread01(text)
        self.connect(self.mythread, SIGNAL("processing_char(QString)"), self.update_char)
        self.connect(self.mythread, SIGNAL("finished()"), self.done)
        self.mythread.start()

        self.btn_Stop.setEnabled(True)
        self.btn_Stop.clicked.connect(self.mythread.terminate)
        self.btn_Start.setEnabled(False)

    def update_char(self, text):
        self.list_Control.addItem(text)
        self.progress_Control.setValue(self.progress_Control.value() + 1)

    def done(self):
        self.btn_Stop.setEnabled(False)
        self.btn_Start.setEnabled(True)
        self.progress_Control.setValue(0)
        QtGui.QMessageBox.information(self, "Done!", "process finished!")

def main():
    app = QtGui.QApplication(sys.argv)
    form = myApp()
    form.show()
    app.exec_()

if __name__ == '__main__':
    main()

okay.

$ pyuic4 -x ./mainwindow.ui -o ./myGUI2.py
$ python main2.py

OpenCV的Ubuntu14配置

INSTALL OPENCV FROM THE UBUNTU OR DEBIAN REPOSITORY
You can install OpenCV from the Ubuntu or Debian repository:
$ sudo apt-get install libopencv-dev python-opencv
However, you will probably not have installed the latest version of OpenCV and you may miss some features (for example: Python 3 bindings do not exist in the repository).

SO, IT IS BETTER TO INSTALL OPENCV FROM THE OFFICIAL SITE.
To install the latest version of OpenCV, first be sure that you have removed the repository version of the library.

There are 2 methods of removing your old installation of OpenCV, and they depend on how you installed OpenCV in the first place!

1- If you have installed from Ubuntu’s repository (or package managers like apt or the package manager of any other distros):
In this case, it is as simple as removing OpenCV's package using your package manager. For example, on Ubuntu based Linux systems you can write the following command in your favorite terminal:
$ sudo apt-get autoremove libopencv-dev python-opencv

2-If you have installed from source (using make/make install):
In this case, the make command should have created an uninstall profile for you. So to remove OpenCV, go to the folder that you have compiled OpenCV (the place you had called make/make install) and execute the following command:
$ sudo make uninstall

3- NOTE: If you do not remember how you installed OpenCV, or none of the above methods works for you, you can use the following command to delete any file that has something to do with OpenCV. Please note that removing files can be dangerous, so do this at your own risk! I take no responsibility!
$ sudo find / -name "*opencv*" -exec rm -i {} \;
$ sudo ldconfig && sudo ldconfig -vp

4- and follow the steps below.

PART – I PREPARE

1. make sure that everything in the system is updated and upgraded
$ sudo apt-get update
# $ sudo apt-get upgrade

2. install dependencies
依赖包安装

2.1 Build tools
$ sudo apt-get install build-essential
主要为build-essential软件包.
为编译程序提供必要的软件包的列表信息,这样软件包才知道头文件、库函数的位置。
此外,它还会下载依赖的软件包,安装gcc/g++/gdb/make等基本编程工具,最后组成一个开发环境。

$ sudo apt-get install cmake
安装cmake用于编译源码

2.2 GUI & OpenGLs
# $ sudo apt-get install qt5-default libvtk6-dev
# sudo apt-get install qt4-default libqt4-opengl-dev libvtk5-qt4-dev libgtk2.0-dev libgtkglext1 libgtkglext1-dev -y

2.3 Media I/O
安装能够支持图像读写以及视频读写的相关依赖包
$ sudo apt-get install libgtk2.0-dev
$ sudo apt-get install libjpeg-dev libpng-dev libtiff-dev libjasper-dev

2.4 Video I/O
$ sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev
# sudo apt-get install libavformat-dev libavutil-ffmpeg54 libavutil-dev libxine2-dev libxine2 libswscale-dev libswscale-ffmpeg3 libdc1394-22 libdc1394-22-dev libdc1394-utils -y

2.5 codec
#sudo apt-get install libavcodec-dev -y
#sudo apt-get install libfaac-dev libmp3lame-dev -y
#sudo apt-get install libopencore-amrnb-dev libopencore-amrwb-dev -y
#sudo apt-get install libtheora-dev libvorbis-dev libxvidcore-dev -y
#sudo apt-get install ffmpeg x264 libx264-dev -y
#sudo apt-get install libv4l-0 libv4l v4l-utils -y

2.6 Python
$ sudo apt-get install python-dev python-numpy
# $ sudo apt-get install python-dev python-tk python-numpy python3-dev python3-tk python3-numpy

2.7 Parallelism and linear algebra libraries, multiprocessing library
$ sudo apt-get install libtbb-dev
# $ sudo apt-get install libtbb2
# libeigen3-dev

2.8 安装pkg-config
pkg-config是一个提供统一接口的工具,用于在从源码编译软件时查询已安装库的头文件路径和链接参数。
$ sudo apt-get install pkg-config

2.9
$ sudo apt-get install libdc1394-22-dev
2.10
$ sudo apt-get install libopencv-dev
# $ sudo apt-get install checkinstall yasm libqt4-dev libqt4-opengl-dev

PART – II make & install

1. download and unzip
下载并解压OpenCV
$ mkdir ~/opencv-src
$ cd ~/opencv-src
$ wget http://downloads.sourceforge.net/project/opencvlibrary/opencv-unix/2.4.11/opencv-2.4.11.zip
$ unzip opencv-2.4.11.zip

2. cmake
Generate the Makefile using cmake. Here we can define which parts of OpenCV we want to compile.
For example, we want to use:
the viz module, Python, Java, TBB, OpenGL, Qt, video support, etc.

2.1 切换到解压后的OpenCV路径
$ cd opencv-2.4.11
$ cmake .
NOTE 后面的 . 表示在当前目录查找CMakeLists.txt文件

不过,像上面这样直接在源码目录中构建并不好,所以:

2.2 在另外一个文件夹(通常为源码目录的子文件夹)中构建Makefile,同时进行一些参数配置
execute the following line at the terminal to create the appropriate Makefile:
$ cd ~/opencv-src/opencv-2.4.11
$ mkdir release
$ cd release
$ cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D WITH_TBB=ON -D WITH_V4L=ON -D WITH_QT=ON -D WITH_OPENGL=ON -D WITH_VTK=ON -D BUILD_NEW_PYTHON_SUPPORT=ON -D INSTALL_C_EXAMPLES=ON -D INSTALL_PYTHON_EXAMPLES=ON -D OPENCV_EXTRA_MODULES_PATH=/home/dehaou1404/opencv-src/opencv-2.4.11/modules ..

-D CMAKE_INSTALL_PREFIX=/usr/local
-D CMAKE_BUILD_TYPE=RELEASE
-D BUILD_NEW_PYTHON_SUPPORT=ON
-D INSTALL_C_EXAMPLES=ON
-D INSTALL_PYTHON_EXAMPLES=ON
-D BUILD_TIFF=ON
-D WITH_TBB=ON
-D WITH_V4L=ON
-D WITH_QT=ON
-D WITH_OPENGL=ON
-D WITH_VTK=ON
-D OPENCV_EXTRA_MODULES_PATH=/home/dehaou1404/opencv-src/opencv-2.4.11/modules
# -D WITH_IMAGEIO=ON
# -D BUILD_TIFF=ON
# -D BUILD_EXAMPLES=ON

After I set WITH_V4L=OFF but still kept WITH_LIBV4L=ON, the configuration skipped the search for 'sys/videoio.h', and the compilation worked.

During configuration, remove the packages that use 'sys/videoio.h'; this just evades the issue, but it works well; hopefully it can be solved in a later release.

NOTE1: Use cmake -DCMAKE_BUILD_TYPE=RELEASE -DCMAKE_INSTALL_PREFIX=/usr/local .. , without spaces after -D, if step 2 does not work.

NOTE2: There are two dots at the end of the line, it is an argument for the cmake program and it means the parent directory (because we are inside the build directory, and we want to refer to the OpenCV directory, which is its parent).
“CMAKE_INSTALL_PREFIX=/usr/local”路径可以自定义修改。

If errors occur, see the CMakeFiles/CMakeError.log file.

For me, one error was:
fatal error: linux/videodev.h: No such file or directory
SOLUTION:
$ sudo apt-get install libv4l-dev
Installing libv4l-dev creates /usr/include/linux/videodev2.h, but the build wants to find linux/videodev.h.
The library does ship header files for compatibility, but fails to put them where applications will look for them.
$ cd /usr/include/linux
$ sudo ln -s ../libv4l1-videodev.h videodev.h
This provides a linux/videodev.h, and of the right version (1).

Another error:
fatal error: sys/videoio.h: No such file or directory

fatal error: ffmpeg/avformat.h: No such file or directory
#include <ffmpeg/avformat.h>
^
compilation terminated.
SO:

Finally,
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D WITH_TBB=ON -D WITH_QT=ON -D WITH_OPENGL=ON -D WITH_VTK=ON -D BUILD_NEW_PYTHON_SUPPORT=ON -D INSTALL_PYTHON_EXAMPLES=ON ..
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D WITH_TBB=ON -D WITH_QT=ON -D WITH_OPENGL=ON -D WITH_VTK=ON -D BUILD_NEW_PYTHON_SUPPORT=ON -D INSTALL_C_EXAMPLES=ON ..
-D INSTALL_C_EXAMPLES=ON ..

-D WITH_V4L=ON

-D INSTALL_PYTHON_EXAMPLES=ON <---- result in errors.

HINTS: if cmake errors once, remove and remake the release folder, then go through the process again.

this is ended:
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D WITH_TBB=ON -D WITH_QT=ON -D WITH_OPENGL=ON -D WITH_VTK=ON -D WITH_V4L=ON -D BUILD_NEW_PYTHON_SUPPORT=ON -D INSTALL_C_EXAMPLES=ON ..

2.3 Check
Check that the above command produces no error and that in particular it reports FFMPEG as YES. If this is not the case you will not be able to read or write videos. Check that Java, Python, TBB, OpenGL, V4L and Qt are all detected correctly. If anything is wrong, go back, correct the errors by maybe installing extra packages and then run cmake again.

3. 编译
$ cat /proc/cpuinfo | grep processor
-- 4 核
$ make -j4

4. 安装
$ sudo make install

PART - III CONFIGURE 动态链接库和头文件配置

1. 配置相关信息,使OpenCV动态库被系统共享
在/etc/ld.so.conf.d目录添加opencv.conf文件:
$ sudo gedit /etc/ld.so.conf.d/opencv.conf
文件内容:
# opencv.conf
/usr/local/lib
使用动态库管理命令ldconfig,使opencv的相关链接库文件被系统共享:
$ sudo ldconfig -v

2. 添加OpenCV的头文件位置
$ sudo vi /etc/bash.bashrc
Add these two lines at the end of the file and save it:
PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib/pkgconfig
export PKG_CONFIG_PATH
pkg-config维护opencv的相关配置文件,可以在/usr/local/lib/pkgconfig目录下看到opencv.pc文件,此文件主要记录opencv的动态库信息和头文件信息。
使用pkg-config命令列出opencv的配置信息:
换路径:
$ cd /usr/local/lib/pkgconfig
命令:
$ pkg-config --libs opencv
查看opencv相关配置信息。
注意:更改相关文件时可能受文件权限限制,故需先更改相关文件的权限。

3. OK
Now you have OpenCV 2.4.9 installed in your computer with 3D visualization, Python, Java, TBB, OpenGL, video, and Qt support.

PART - IV TEST OpenCV

1. build some samples included in OpenCV:
切换到opencv下载解压后的文件夹目录下,然后进入samples/c/目录下,编译样例文件:
$ cd ~/opencv-2.4.9/samples/c
$ chmod +x build_all.sh
$ ./build_all.sh
执行完成后,会生成对应的可执行文件。

2. 运行其中一个样例
$ ./find_obj
显示执行结果即可。

3. These examples use the old C interface:
$ ./facedetect --cascade="/usr/local/share/OpenCV/haarcascades/haarcascade_frontalface_alt.xml" --scale=1.5 lena.jpg
$ ./facedetect --cascade="/usr/local/share/OpenCV/haarcascades/haarcascade_frontalface_alt.xml" --nested-cascade="/usr/local/share/OpenCV/haarcascades/haarcascade_eye.xml" --scale=1.5 lena.jpg

4. The following examples use the new C++ interface:
$ ~/opencv-2.4.9/build/bin/cpp-example-grabcut ~/opencv-2.4.9/samples/cpp/lena.jpg
$ ~/opencv-2.4.9/build/bin/cpp-example-calibration_artificial

5. run some Python code:
$ python ~/opencv-2.4.9/samples/python2/turing.py

6. read a video and use OpenGL with Qt through this great sample that detects the features from the video, then estimates the 3D location of the structure using POSIT, and finally uses OpenGL to draw in 3D (great sample Javier):
$ cd ~/opencv-2.4.9/samples/cpp/Qt_sample
$ mkdir build
$ cd build
$ cmake ..
$ make
$ ./OpenGL_Qt_Binding

7. build a sample using the 3D visualization module viz:
$ cd ~/opencv-2.4.9/samples/cpp/tutorial_code/viz
$ g++ -o widget_pose `pkg-config opencv --cflags` widget_pose.cpp `pkg-config opencv --libs`
$ ./widget_pose

8. hello world
$ mkdir ~/prj-opencv
$ cd ~/prj-opencv
$ vi testopencv.cpp
#include <stdio.h>
#include <opencv2/opencv.hpp>

using namespace cv;

int main(int argc, char* argv[])
{
    Mat image;
    image = imread(argv[1], 1);

    if (argc != 2 || !image.data)
    {
        printf("No image data\n");
        return -1;
    }

    namedWindow("Display", CV_WINDOW_AUTOSIZE);
    imshow("Display", image);
    waitKey(0);
    return 0;
}

then,
$ vi CMakeLists.txt
cmake_minimum_required(VERSION 2.8)
project(testopencv)
find_package(OpenCV REQUIRED)
add_executable(testopencv testopencv.cpp)
target_link_libraries(testopencv ${OpenCV_LIBS})

$ cmake .
$ make

$ ./testopencv one.jpg

PART – V CONCLUSION

now you can use OpenCV with C++, C, Python, and Java. The Qt enhanced 2D interface is enabled, 3D data can be displayed using OpenGL directly, or using the new viz module. Multi threading functionality is enabled using TBB. Also, video support is enabled as well.

REF: http://www.samontab.com/web/2014/06/installing-opencv-2-4-9-in-ubuntu-14-04-lts/