"VGGNet"의 두 판 사이의 차이
		
		
		
		
		
		둘러보기로 가기
		검색하러 가기
		
				
		
		
	
Pythagoras0 (토론 | 기여)  (→노트:  새 문단)  | 
				Pythagoras0 (토론 | 기여)   | 
				
(차이 없음) 
 | |
2020년 12월 22일 (화) 04:26 기준 최신판
Notes
Corpus
- VGGNet is a Convolutional Neural Network architecture proposed by Karen Simonyan and Andrew Zisserman from the University of Oxford in 2014.[1]
 - Another variation of VGGNet has 19 weight layers, consisting of 16 convolutional layers and 3 fully connected layers, with the same 5 pooling layers.[1]
 - Both variations of VGGNet contain two fully connected layers with 4096 channels each, followed by another fully connected layer with 1000 channels to predict the 1000 labels (see the architecture sketch after this list).[1]
 - VGGNet is a neural network that performed very well in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) in 2014.[2]
 - VGGNet apparently took 2-3 weeks to train on a computer with four NVIDIA Titan Black GPUs.[2]
 - Let’s say you want to train a network such as VGGNet to recognize faces of celebrities (see the fine-tuning sketch after this list).[2]
 - Since the VGGNet we’re using was trained on ImageNet, it’s really good at distinguishing between different breeds of dogs, different types of fish, and so on.[2]
 - VGGNet was invented by the Visual Geometry Group at the University of Oxford.[3]
 - The reason to understand VGGNet is that many modern image classification models are built on top of this architecture.[3]
 - ResNet achieves better accuracy than VGGNet and GoogLeNet while being computationally more efficient than VGGNet.[4]
 - The architecture is similar to VGGNet, consisting mostly of 3×3 filters.[4]
 - Starting from the VGGNet-style architecture, shortcut connections are inserted to form a residual network (see the residual-block sketch after this list).[4]
 - VGGNet was a competitor in the ImageNet ILSVRC-2014 image classification competition and scored second place.[5]
 - The runner-up in ILSVRC 2014 was the network from Karen Simonyan and Andrew Zisserman that became known as the VGGNet.[6]
 - A downside of the VGGNet is that it is more expensive to evaluate and uses a lot more memory and parameters (140M).[6]
 - Let’s break down the VGGNet in more detail as a case study.[6]
 - The whole VGGNet is composed of CONV layers that perform 3×3 convolutions with stride 1 and pad 1, and of POOL layers that perform 2×2 max pooling with stride 2 (and no padding).[6]
 - They compare HybridNet with VGGNet and CBP-CNN for 292, 100, and 200 sub-classes of the VegFru dataset.[7]
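
The layer pattern quoted above (3×3 convolutions with stride 1 and padding 1, 2×2 max pooling with stride 2, and a 4096–4096–1000 classifier) can be written down directly. Below is a minimal PyTorch sketch of the 16-weight-layer variant, assuming 224×224 RGB inputs; it illustrates the quoted configuration and is not the authors' original code.

```python
# Sketch of the VGG-16 configuration: all CONV layers are 3x3 with stride 1
# and padding 1, all POOL layers are 2x2 max pooling with stride 2, and the
# classifier is 4096 -> 4096 -> 1000.
import torch
import torch.nn as nn

# output channels of each 3x3 conv layer; "M" marks a 2x2 max-pool
VGG16_CFG = [64, 64, "M", 128, 128, "M", 256, 256, 256, "M",
             512, 512, 512, "M", 512, 512, 512, "M"]

def make_features(cfg):
    layers, in_ch = [], 3
    for v in cfg:
        if v == "M":
            layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
        else:
            layers += [nn.Conv2d(in_ch, v, kernel_size=3, stride=1, padding=1),
                       nn.ReLU(inplace=True)]
            in_ch = v
    return nn.Sequential(*layers)

class VGG16(nn.Module):
    def __init__(self, num_classes=1000):
        super().__init__()
        self.features = make_features(VGG16_CFG)
        self.classifier = nn.Sequential(
            nn.Linear(512 * 7 * 7, 4096), nn.ReLU(inplace=True), nn.Dropout(),
            nn.Linear(4096, 4096), nn.ReLU(inplace=True), nn.Dropout(),
            nn.Linear(4096, num_classes),
        )

    def forward(self, x):
        x = self.features(x)      # 3x224x224 -> 512x7x7 after five pools
        x = torch.flatten(x, 1)
        return self.classifier(x)

model = VGG16()
print(sum(p.numel() for p in model.parameters()))  # roughly 138 million
```

The printed parameter count comes out to roughly 138 million, which matches the "140M" figure quoted from CS231n above.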
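The celebrity-face example above is a transfer-learning scenario: keep the ImageNet-trained convolutional features and retrain only the classifier head. A hedged sketch using torchvision's pretrained VGG-16 follows; the number of identities and the choice to freeze the feature extractor are illustrative assumptions, not taken from the sources.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CELEBRITIES = 100  # hypothetical number of identities to recognize

# Load VGG-16 with ImageNet weights and freeze the convolutional features,
# so only the new classifier head is trained on the face dataset.
model = models.vgg16(pretrained=True)
for param in model.features.parameters():
    param.requires_grad = False

# Replace the final 4096 -> 1000 ImageNet layer with a 4096 -> NUM_CELEBRITIES layer.
model.classifier[6] = nn.Linear(4096, NUM_CELEBRITIES)

# Optimize only the parameters that still require gradients.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad),
    lr=1e-3, momentum=0.9,
)
```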
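The ResNet items above describe taking a VGG-style stack of 3×3 convolutions and adding identity shortcut connections. A minimal sketch of such a residual block is shown below; the channel count and the use of batch normalization are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions, as in a VGG-style stack, plus an identity shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)  # shortcut: add the input back before the activation

x = torch.randn(1, 64, 56, 56)
print(ResidualBlock(64)(x).shape)  # torch.Size([1, 64, 56, 56])
```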
 
Sources
- [1] VGGNet Architecture Explained
 - [2] Convolutional neural networks on the iPhone with VGGNet
 - [3] What is the VGG neural network?
 - [4] ResNet, AlexNet, VGGNet, Inception: Understanding various architectures of Convolutional Networks – CV-Tricks.com
 - [5] hollance/VGGNet-Metal: iPhone version of the VGGNet convolutional neural network for image recognition
 - [6] CS231n Convolutional Neural Networks for Visual Recognition
 - [7] A Review of Convolutional Neural Network Applied to Fruit Image Processing