Model Implementation in TensorFlow

Sequential
import tensorflow as tf
import matplotlib.pyplot as plt

# Generate a noisy linear dataset: y = 3x + 1 + noise
x_train = tf.random.normal(shape=(1000, 1), dtype=tf.float32)
y_train = 3 * x_train + 1 + 0.2 * tf.random.normal(shape=(1000, 1), dtype=tf.float32)
x_test = tf.random.normal(shape=(300, 1), dtype=tf.float32)
y_test = 3 * x_test + 1 + 0.2 * tf.random.normal(shape=(300, 1), dtype=tf.float32)

# Visualize the training data
fig, ax = plt.subplots(figsize=(10, 10))
ax.scatter(x_train.numpy(), y_train.numpy())
ax.tick_params(labelsize=20)
ax.grid()
- Create the train and test datasets
- Add noise to the target y
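For larger datasets, the same tensors can also be fed through a tf.data pipeline instead of raw arrays; a minimal sketch, assuming the x_train and y_train above (the shuffle buffer and batch size are arbitrary choices, not from the original):

# Optional sketch: wrap the tensors in a tf.data pipeline
# (shuffle buffer and batch size are arbitrary here)
train_ds = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_ds = train_ds.shuffle(buffer_size=1000).batch(32)
# model.fit(train_ds, epochs=10) would consume this directly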
# Build a single-layer linear model with the Sequential API
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(units=1, activation='linear')
])

# Configure loss and optimizer, then train and evaluate
model.compile(loss='mean_squared_error',
              optimizer='SGD')
model.fit(x_train, y_train, epochs=10)
print()
model.evaluate(x_test, y_test)
Epoch 1/10
32/32 [==============================] - 0s 424us/step - loss: 2.0352
Epoch 2/10
32/32 [==============================] - 0s 418us/step - loss: 0.5651
Epoch 3/10
32/32 [==============================] - 0s 421us/step - loss: 0.1755
Epoch 4/10
32/32 [==============================] - 0s 457us/step - loss: 0.0744
Epoch 5/10
32/32 [==============================] - 0s 494us/step - loss: 0.0479
Epoch 6/10
32/32 [==============================] - 0s 428us/step - loss: 0.0411
Epoch 7/10
32/32 [==============================] - 0s 453us/step - loss: 0.0392
Epoch 8/10
32/32 [==============================] - 0s 413us/step - loss: 0.0386
Epoch 9/10
32/32 [==============================] - 0s 509us/step - loss: 0.0384
Epoch 10/10
32/32 [==============================] - 0s 430us/step - loss: 0.0384
10/10 [==============================] - 0s 508us/step - loss: 0.0417
- Make full use of the Keras model API
- Apply the loss and optimizer through compile
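Since the data were generated as y = 3x + 1, the fitted weights should land near the true slope and intercept; a minimal sanity-check sketch using the trained model above:

# Sketch: inspect the learned kernel and bias of the single Dense layer
w, b = model.layers[0].get_weights()
print(w, b)  # expect w ≈ [[3.0]] and b ≈ [1.0], up to noise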
Model Subclassing
# Regenerate the datasets (a small train set for the per-sample training loop)
x_train = tf.random.normal(shape=(10, 1), dtype=tf.float32)
y_train = 3 * x_train + 1 + 0.2 * tf.random.normal(shape=(10, 1), dtype=tf.float32)
x_test = tf.random.normal(shape=(300, 1), dtype=tf.float32)
y_test = 3 * x_test + 1 + 0.2 * tf.random.normal(shape=(300, 1), dtype=tf.float32)
- Model subclassing makes it possible to build more complex networks
- Residual learning is a representative example (see the sketch below)
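To illustrate the residual-learning point, a minimal sketch of a residual block written as a subclassed model; this ResidualBlock is a hypothetical example, not part of the original code, and it assumes the input's last dimension equals units so the skip connection can be added:

# Hypothetical sketch: a residual block, y = x + F(x),
# which the Sequential API cannot express directly
class ResidualBlock(tf.keras.Model):
    def __init__(self, units):
        super().__init__()
        self.d1 = tf.keras.layers.Dense(units, activation='relu')
        self.d2 = tf.keras.layers.Dense(units, activation='linear')

    def call(self, x):
        # Skip connection: add the input back to the transformed output
        # (assumes x's last dimension equals units)
        return x + self.d2(self.d1(x))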
class LinearPredictor(tf.keras.Model):
    def __init__(self):
        super(LinearPredictor, self).__init__()
        self.d1 = tf.keras.layers.Dense(units=1, activation='linear')

    def call(self, x):
        # Forward pass: a single linear layer
        x = self.d1(x)
        return x
EPOCHS = 10
LR = 0.01

model = LinearPredictor()
criterion = tf.keras.losses.MeanSquaredError()
optimizer = tf.keras.optimizers.SGD(learning_rate=LR)
- Similar in form to PyTorch
- Inherit from tf.keras.Model; the call method plays the role of PyTorch's forward (see the sketch below)
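Calling the model instance runs call under the hood via Keras's __call__, just as calling a PyTorch module runs forward; a minimal sketch:

# Sketch: invoking the model calls call() internally
sample = tf.constant([[1.0]])
print(model(sample))  # untrained prediction, shape (1, 1)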
for epoch in range(EPOCHS):
    for x, y in zip(x_train, y_train):
        x = tf.reshape(x, (1, 1))
        # Record the forward pass so gradients can be computed
        with tf.GradientTape() as tape:
            predictions = model(x)
            loss = criterion(y, predictions)
        # Compute gradients of the loss w.r.t. the weights and apply them
        gradients = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    print(epoch + 1)
    print(loss)
1
tf.Tensor(30.546247, shape=(), dtype=float32)
2
tf.Tensor(17.309853, shape=(), dtype=float32)
3
tf.Tensor(10.248536, shape=(), dtype=float32)
4
tf.Tensor(6.353091, shape=(), dtype=float32)
5
tf.Tensor(4.123877, shape=(), dtype=float32)
6
tf.Tensor(2.7975774, shape=(), dtype=float32)
7
tf.Tensor(1.9762115, shape=(), dtype=float32)
8
tf.Tensor(1.4467182, shape=(), dtype=float32)
9
tf.Tensor(1.091786, shape=(), dtype=float32)
10
tf.Tensor(0.8449189, shape=(), dtype=float32)
- Through GradientTape, the gradients of all weights are computed and then applied at once by the optimizer; a batched variant is sketched below
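The per-sample loop above can also be collapsed into a single batched, graph-compiled step; a minimal sketch assuming the model, criterion, and optimizer defined above (the train_step name is ours, not from the original):

# Sketch: one batched training step, compiled with tf.function
@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        predictions = model(x)
        loss = criterion(y, predictions)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    return loss

for epoch in range(EPOCHS):
    loss = train_step(x_train, y_train)  # whole train set as one batch
    print(epoch + 1, float(loss))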