# 06.5 Implementing Logic AND, OR, and NOT Gates


A single-layer neural network with a logistic (binary-classification) output can implement the following logic gates:

- AND
- NAND
- OR
- NOR
- NOT

### 6.5.1 Implementing the NOT gate

Take a single neuron $y = wx + b$ with $w = -1$ and $b = 1$:

- when $x=0$: $y = -1 \times 0 + 1 = 1$
- when $x=1$: $y = -1 \times 1 + 1 = 0$

At the midpoint of the two inputs, $x = 0.5$, the output is likewise the midpoint:

$$y = wx + b = -1 \times 0.5 + 1 = 0.5$$

| Sample | $x$ | $y$ |
|---|---|---|
| 1 | 0 | 1 |
| 2 | 1 | 0 |
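The two truth-table rows can be checked directly with the hand-picked weights. A minimal sketch (the `not_gate` helper is illustrative, not part of the chapter's code):

```python
import numpy as np

def not_gate(x, w=-1.0, b=1.0):
    """A single linear neuron y = w*x + b with the hand-picked NOT weights."""
    return w * x + b

print(not_gate(np.array([0.0, 1.0])))  # [1. 0.]
```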

```python
class LogicDataReader(SimpleDataReader):
    def Read_Logic_NOT_Data(self):
        X = np.array([0,1]).reshape(2,1)
        Y = np.array([1,0]).reshape(2,1)
        self.XTrain = self.XRaw = X
        self.YTrain = self.YRaw = Y
        self.num_train = self.XRaw.shape[0]
```


Since the NOT gate has one input and one output:

```python
num_input = 1
num_output = 1
```


Training output:

```
......
2514 1 0.0020001369266925305
2515 1 0.0019993382569061806
W= [[-12.46886021]]
B= [[6.03109791]]
[[0.99760291]
 [0.00159743]]
```


The trained weights correspond to the line:

$$y = -12.468x + 6.031$$
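Because the network is trained as a binary classifier, its predictions are the logistic outputs $\sigma(Wx+B)$. A quick sanity check with the logged `W` and `B` reproduces the two printed predictions:

```python
import numpy as np

# Trained parameters copied from the training log above
W = -12.46886021
B = 6.03109791

def forward(x):
    """Logistic output of the trained NOT neuron: sigmoid(W*x + B)."""
    return 1.0 / (1.0 + np.exp(-(W * x + B)))

print(forward(0.0))   # ~0.9976, close to the target 1
print(forward(1.0))   # ~0.0016, close to the target 0
```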

### 6.5.2 Implementing the AND, NAND, OR, and NOR gates

| Sample | $x_1$ | $x_2$ | AND | NAND | OR | NOR |
|---|---|---|---|---|---|---|
| 1 | 0 | 0 | 0 | 1 | 0 | 1 |
| 2 | 0 | 1 | 0 | 1 | 1 | 0 |
| 3 | 1 | 0 | 0 | 1 | 1 | 0 |
| 4 | 1 | 1 | 1 | 0 | 1 | 0 |
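The four label columns can also be generated with NumPy's element-wise logical operators, which is handy as ground truth when checking a trained network. A small sketch, not part of the chapter's code:

```python
import numpy as np

# The four input combinations, in truth-table order
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
x1, x2 = X[:, 0], X[:, 1]

AND = x1 & x2     # element-wise AND  -> [0 0 0 1]
NAND = 1 - AND    # negation of AND   -> [1 1 1 0]
OR = x1 | x2      # element-wise OR   -> [0 1 1 1]
NOR = 1 - OR      # negation of OR    -> [1 0 0 0]
print(AND, NAND, OR, NOR)
```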

#### Reading the data

```python
class LogicDataReader(SimpleDataReader):
    # Reader for the AND-gate samples (Y = x1 AND x2)
    def Read_Logic_AND_Data(self):
        X = np.array([0,0,0,1,1,0,1,1]).reshape(4,2)
        Y = np.array([0,0,0,1]).reshape(4,1)
        self.XTrain = self.XRaw = X
        self.YTrain = self.YRaw = Y
        self.num_train = self.XRaw.shape[0]
```

......

(The data readers for the NAND, OR, and NOR gates follow the same pattern, differing only in the `Y` labels.)


#### Test function

```python
def Test(net, reader):
    X, Y = reader.GetWholeTrainSamples()   # the four samples and their labels
    A = net.inference(X)
    print(A)
    diff = np.abs(A - Y)
    result = np.where(diff < 1e-2, True, False)
    # the gate is learned only if all four predictions are within 0.01 of the labels
    return result.sum() == 4
```


#### Training function

```python
def train(reader, title):
    ...
    params = HyperParameters(eta=0.5, max_epoch=10000, batch_size=1,
                             eps=2e-3, net_type=NetType.BinaryClassifier)
    num_input = 2
    num_output = 1
    net = NeuralNet(params, num_input, num_output)
    # test
    ......
```


#### Results

Training output for the AND gate:

```
......
epoch=4236
4236 3 0.0019998012999365928
W= [[11.75750515]
 [11.75780362]]
B= [[-17.80473354]]
[[9.96700157e-01]
 [2.35953140e-03]
 [1.85140939e-08]
 [2.35882891e-03]]
True
```
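The four predictions in the log can be reproduced from the reported `W` and `B`. Note that the log prints them in a different order than the truth table, presumably because training with `batch_size=1` shuffles the samples. A quick sketch:

```python
import numpy as np

# Learned AND-gate parameters copied from the log above
W = np.array([[11.75750515], [11.75780362]])
B = np.array([[-17.80473354]])

# The four inputs in truth-table order
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

A = sigmoid(np.dot(X, W) + B)
print(A.ravel())  # ~[1.85e-08, 2.36e-03, 2.36e-03, 9.97e-01]
```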


### 6.5.3 Comparing the results

The trained parameters of the five gates:

| Gate | $W_1$ | $W_2$ | $B$ |
|---|---|---|---|
| NOT | -12.468 | - | 6.031 |
| AND | 11.757 | 11.757 | -17.804 |
| NAND | -11.763 | -11.763 | 17.812 |
| OR | 11.743 | 11.743 | -5.412 |
| NOR | -11.738 | -11.738 | 5.409 |

1. For each two-input gate, $W_1$ and $W_2$ are essentially equal and have the same sign, so the dividing line always runs at 135°, i.e. with slope $-1$.
2. The higher the required precision (the smaller `eps`), the closer the endpoints of the dividing line get to the 0.5 midpoints of the unit square's sides.
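With $W_1 = W_2 = w$, the decision boundary $\sigma(w x_1 + w x_2 + B) = 0.5$ is the line $w x_1 + w x_2 + B = 0$, i.e. $x_2 = -x_1 - B/w$: slope $-1$, hence the 135° direction. A quick numeric check with the AND parameters shows the boundary crossing the square's side $x_1 = 1$ close to the midpoint:

```python
# With W1 = W2 = w, the boundary w*x1 + w*x2 + B = 0 becomes x2 = -x1 - B/w.
w, B = 11.757, -17.804         # AND-gate parameters from the comparison above

# Where the 135-degree line crosses the square's side x1 = 1:
x2_crossing = -1.0 - B / w
print(round(x2_crossing, 3))   # close to the midpoint 0.5
```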

Code location: ch06, Level4

### Questions and exercises

1. Reduce the value of max_epoch and observe the training result.
2. Why do OR and NOR reach the same precision in only about 2,000 epochs, while AND and NAND need more than 4,000?