GETTING STARTED WITH NEURAL NETWORKS (Coggle Diagram)
COVER
INTRODUCTION
KERAS
SETTING UP
DEEP-LEARNING WORKSTATION
CORE COMPONENTS
NEURAL NETWORKS
NEURAL NETWORK
SOLVE
BASIC CLASSIFICATION
REGRESSION
INTRODUCTION
SOLVE
REAL PROBLEMS
MULTICLASS CLASSIFICATION
SCALAR REGRESSION
BINARY CLASSIFICATION
CORE COMPONENTS
NETWORKS
OBJECTIVE FUNCTIONS
LAYERS
OPTIMIZERS
INTRODUCTION
KERAS
PYTHON
DEEP-LEARNING LIBRARY
DEEP-LEARNING WORKSTATION
KERAS
GPU SUPPORT
TENSORFLOW
EXAMPLES
NEURAL NETWORKS
CLASSIFY
MOVIE REVIEWS
POSITIVE
NEGATIVE
NEWSWIRES
TOPIC (MULTICLASS CLASSIFICATION)
ESTIMATING
PRICE
HOUSE
REAL ESTATE DATA
SUMMARY
HANDLE
MACHINE-LEARNING
TASKS
VECTOR DATA
MULTICLASS CLASSIFICATION
SCALAR REGRESSION
BINARY CLASSIFICATION
PREPROCESS
RAW DATA
BEFORE FEEDING
NEURAL NETWORK
DATA
FEATURES
DIFFERENT RANGES
SCALE
FEATURE
INDEPENDENTLY
TRAINING
PROGRESSES
NEURAL NETWORK
OVERFIT
OBTAIN
WORSE RESULTS
FEW
DATA
SMALL NETWORK
ONE OR TWO
HIDDEN LAYERS
AVOID
OVERFITTING
DATA
MANY CATEGORIES
INFORMATION BOTTLENECKS
INTERMEDIATE LAYERS
TOO SMALL
K-FOLD VALIDATION
EVALUATE
MODEL
REGRESSION
DIFFERENT LOSS FUNCTION
CLASSIFICATION
DIFFERENT EVALUATION METRICS
ANATOMY
TRAINING
INPUT DATA
TARGETS
LOSS FUNCTION
FEEDBACK SIGNAL
LEARNING
LAYERS
OPTIMIZER
DETERMINES
LEARNING
PROCEEDS
NETWORK
LAYERS
MAPS
INPUT DATA
PREDICTION
LOSS FUNCTION
COMPARES
PREDICTION
TARGET
MEASURE
NETWORK'S PREDICTION
MATCH
TARGET
OPTIMIZER
LOSS VALUE
UPDATE
NETWORK'S WEIGHTS
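The training anatomy sketched by the nodes above (layers map input data to a prediction, the loss function compares prediction and target, the optimizer uses the loss as a feedback signal to update the weights) can be written out in plain NumPy for a single dense layer. The toy data and learning rate here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: targets follow y = 2x + 1 plus noise (made-up example).
x = rng.normal(size=(256, 1))
y = 2.0 * x + 1.0 + 0.05 * rng.normal(size=(256, 1))

# A single dense layer: its weights are the network's "knowledge".
w = np.zeros((1, 1))
b = np.zeros(1)

lr = 0.1  # optimizer setting (plain gradient descent)
for step in range(200):
    pred = x @ w + b                  # layers map input data to a prediction
    loss = np.mean((pred - y) ** 2)   # loss function compares prediction and target
    grad = 2.0 * (pred - y) / len(x)  # feedback signal from the loss
    w -= lr * (x.T @ grad)            # optimizer uses the loss value...
    b -= lr * grad.sum(axis=0)        # ...to update the network's weights
```

After enough steps the weights recover the underlying mapping (w near 2, b near 1), which is the "learning" the map refers to.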
LAYERS
DATA STRUCTURE
DATA PROCESSING MODULE
INPUT
TENSORS
OUTPUT
TENSORS
DIFFERENT TYPES
DIFFERENT DATA FORMAT
RECURRENT LAYER
SEQUENCE DATA
LSTM LAYER
2D CONVOLUTION LAYER
4D TENSORS
DENSELY CONNECTED LAYERS
VECTOR DATA
STATELESS
STATE
WEIGHT
KNOWLEDGE
NETWORK
KERAS
CLIP
LAYERS
TOGETHER
MODELS
COMMON
MAPPING
SINGLE INPUT
LAYERS
SINGLE OUTPUT
LINEAR STACK
TOPOLOGIES
TYPES
INCEPTION BLOCKS
TWO-BRANCH NETWORKS
MULTIHEAD NETWORKS
NETWORK
HYPOTHESIS SPACE
SPECIFIC SERIES
TENSOR OPERATIONS
MAPPING
INPUT DATA
PREDICTIONS
LOSS FUNCTION
OPTIMIZERS
CONFIGURE
LEARNING PROCESS
MULTIPLE OUTPUTS
MULTIPLE LOSS FUNCTION
GRADIENT DESCENT PROCESS
SINGLE SCALAR LOSS VALUE
ALL LOSSES
AVERAGING
SINGLE SCALAR QUANTITY
COMMON PROBLEMS
CATEGORICAL CROSSENTROPY
MANY CLASS CLASSIFICATION PROBLEM
MEAN SQUARE ERROR
REGRESSION PROBLEM
BINARY CROSSENTROPY
TWO CLASS CLASSIFICATION
CONNECTIONIST TEMPORAL CLASSIFICATION
SEQUENCE LEARNING PROBLEM
INTRODUCTION KERAS
KERAS
DEEP-LEARNING FRAMEWORK
DEFINE
DEEP LEARNING
MODEL
TRAIN
CODE
RUNNING
GPU
CPU
API
PROTOTYPE
DEEP LEARNING MODELS
ARBITRARY NETWORK ARCHITECTURES
MULTI-INPUT
MODELS
MULTI-OUTPUT
BUILT-IN SUPPORT
CONVOLUTIONAL NETWORK
RECURRENT NETWORK
COMBINATION
BOTH
DEVELOPING
KERAS
DEFINE
TRAINING DATA
INPUT TENSOR
TARGET TENSOR
NETWORK
LAYERS
MAPS
INPUT
TARGETS
CONFIGURE
LEARNING PROCESS
OPTIMIZER
METRICS
MONITOR
LOSS FUNCTION
MODEL
SEQUENTIAL CLASS
FUNCTIONAL API
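The development workflow listed above (define input and target tensors, define a network of layers, configure the learning process with an optimizer, a loss function, and metrics to monitor, then fit) can be sketched with the Sequential class. The shapes, hyperparameters, and random stand-in data below are arbitrary choices for illustration, not from the source.

```python
import numpy as np
from tensorflow import keras

# 1. Define training data: input tensors and target tensors (random stand-ins).
inputs = np.random.random((100, 20)).astype("float32")
targets = np.random.randint(0, 2, size=(100, 1)).astype("float32")

# 2. Define a network of layers mapping inputs to targets (Sequential class).
model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])

# 3. Configure the learning process: optimizer, loss function, metrics to monitor.
model.compile(optimizer="rmsprop", loss="binary_crossentropy", metrics=["accuracy"])

# 4. Iterate on the training data by calling fit().
history = model.fit(inputs, targets, epochs=2, batch_size=32, verbose=0)
```

The functional API mentioned above covers the same four steps but allows arbitrary graph topologies instead of a linear stack.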
NEXT CHAPTER
INTUITION
TYPE
ARCHITECTURE
DIFFERENT
PROBLEMS
RIGHT
LEARNING CONFIGURATION
TWEAK
MODEL
RESULTS
DESIRED
EXAMPLES
CLASSIFICATION
TWO CLASS
MANY CLASS
REGRESSION
DEEP LEARNING WORKSTATION
USE
MODERN GPU
JUPYTER NOTEBOOK
CLASSIFYING MOVIE REVIEWS
SUMMARY
PREPROCESS
RAW DATA
TO FEED
TENSORS
NEURAL NETWORK
SEQUENCE
WORDS
ENCODED
BINARY VECTORS
DENSE LAYERS
RELU ACTIVATION
SOLVE
PROBLEMS
BINARY CLASSIFICATION
DENSE LAYER
LAST
ONE UNIT
SIGMOID ACTIVATION
LOSS FUNCTION
binary_crossentropy
OUTPUT
SCALAR
1
0
ENCODING
PROBABILITY
IMPROVE
TRAINING DATA
OVERFITTING
WORSE RESULTS
MONITOR
PERFORMANCE
DATA
OUTSIDE
TRAINING SET
POSITIVE
TEXT CONTENT
REVIEW
NEGATIVE
IMDB DATASET
TEST DATA
TRAINING DATA
SEPARATE
REVIEWS (SEQUENCES OF WORDS)
SEQUENCES OF INTEGERS
PREPARE DATA
PAD
LISTS
SAME LENGTH
TURN
INTEGER
TENSOR SHAPE
LIST
VECTORS
1
0
FLOATING-POINT VECTOR DATA
FIRST LAYER
DENSE LAYER
FIRST LAYER (EMBEDDING LAYER)
EXAMPLE
VECTOR
[3,5]
10000 DIMENSIONAL VECTOR
1
3
5
0
ALL OTHER ENTRIES
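The encoding in the example above (the sequence [3, 5] becomes a 10,000-dimensional vector that is all 0s except for 1s at indices 3 and 5) can be written as a small helper; `vectorize_sequences` is an illustrative name, and 10,000 is the vocabulary size assumed by the example.

```python
import numpy as np

def vectorize_sequences(sequences, dimension=10000):
    """Turn lists of word indices into binary (multi-hot) vectors."""
    results = np.zeros((len(sequences), dimension), dtype="float32")
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.0  # set the listed indices to 1, leave the rest 0
    return results

encoded = vectorize_sequences([[3, 5]])
```

The result is the floating-point vector data that a first Dense layer can consume directly.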
BUILDING NETWORK
DENSE
LAYERS
RELU ACTIVATION
LABELS
VECTORS
INPUT DATA
VECTORS
Dense(16,
activation='relu')
16
NUMBER
HIDDEN UNITS
RELU
ACTIVATION FUNCTION
MAKE ZERO
NEGATIVE VALUES
RECTIFIED
LINEAR UNIT FUNCTION
1 more item...
WITHOUT RELU
POORER RESULTS
TOO RESTRICTED
LAYERS
TWO LAYERS
16 HIDDEN UNITS EACH
THIRD LAYER
OUTPUT
SCALAR PREDICTION
CHOOSE
NUMBER
LAYER
HIDDEN UNIT
LAYER
MORE UNITS
MORE
COMPLEX PATTERN REPRESENTATION
EXPENSIVE
UNWANTED PATTERNS
SIGMOID FUNCTION
SQUEEZES
ARBITRARY VALUES
[0,1] INTERVAL
WITHOUT RELU
LEARN
TWO LINEAR OPERATIONS
HYPOTHESIS SPACE
ALL POSSIBLE LINEAR TRANSFORMATION
16-DIMENSIONAL SPACE
RESTRICTED
TWO-CLASS CLASSIFICATION
BINARY CLASSIFICATION
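A NumPy forward pass through the architecture described above (two 16-unit relu layers, then a one-unit sigmoid layer) shows why the output can be read as a probability. The weights here are random and untrained, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def relu(x):
    return np.maximum(x, 0.0)        # rectified linear unit: zeroes out negative values

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))  # squeezes any value into the [0, 1] interval

x = rng.normal(size=(4, 10000))      # a batch of 4 review vectors (random stand-in)
W1 = rng.normal(scale=0.01, size=(10000, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 16));     b2 = np.zeros(16)
W3 = rng.normal(scale=0.1, size=(16, 1));      b3 = np.zeros(1)

h1 = relu(x @ W1 + b1)   # Dense(16, activation='relu')
h2 = relu(h1 @ W2 + b2)  # Dense(16, activation='relu')
p = sigmoid(h2 @ W3 + b3)  # Dense(1, activation='sigmoid'): one scalar per review
```

Without the relu nonlinearities, the two stacked linear operations would collapse into one, which is why the hypothesis space would be too restricted.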
QUESTIONS
QUESTION 1
WHERE
KNOWLEDGE
STORED
EXPERIENCE
QUESTION 2
WHAT
FUNCTION
LAYERS
QUESTION 3
HOW
LISTS
CLASSIFIED
CLASSIFYING NEWSWIRES
OVERVIEW
SINGLE LABEL
MULTICLASS CLASSIFICATION
EACH POINT
ONE CATEGORY ONLY
MULTILABEL
NETWORK
CLASSIFY
REUTERS NEWSWIRES
DATA POINT
MULTIPLE CATEGORIES
MULTILABEL
MULTICLASS CLASSIFICATION
SUMMARY
LAST
DENSE LAYER
SIZE N
CLASSIFY
DATA POINTS
N CATEGORIES
SINGLE LABEL
MULTICLASS CLASSIFICATION
NETWORK
END
SOFTMAX ACTIVATION
PROBABILITY DISTRIBUTION
N OUTPUT CLASSES
ENCODE LABELS
CATEGORICAL ENCODING
CATEGORICAL CROSSENTROPY
LOSS FUNCTION
INTEGERS
SPARSE CATEGORICAL CROSSENTROPY
LOSS FUNCTION
AVOID
INFORMATION BOTTLENECKS
INTERMEDIATE LAYERS
TOO SMALL
CROSSENTROPY
LOSS FUNCTION
MINIMISE
DISTANCE
PROBABILITY DISTRIBUTION
NETWORK
TRUE DISTRIBUTION
TARGET
PREPARE THE DATA
VECTORISE
LABELS
CAST
LABEL LIST
INTEGER TENSORS
LOSS FUNCTION
sparse_categorical_crossentropy
ONE-HOT ENCODING
CATEGORICAL DATA
LABEL
ENCODED
VECTORS
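The two label encodings contrasted above can be sketched in NumPy: one-hot vectors (which pair with the categorical_crossentropy loss) versus plain integer labels (which pair with sparse_categorical_crossentropy). The helper name `to_one_hot` is illustrative; 46 is the number of topic categories assumed here.

```python
import numpy as np

labels = np.array([3, 0, 45])  # integer labels: usable as-is with the
                               # sparse_categorical_crossentropy loss function

def to_one_hot(labels, num_classes=46):
    """One-hot (categorical) encoding: each label becomes an all-zero vector
    with a 1 at the label's index; pairs with categorical_crossentropy."""
    results = np.zeros((len(labels), num_classes), dtype="float32")
    results[np.arange(len(labels)), labels] = 1.0
    return results

one_hot = to_one_hot(labels)
```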
BUILD NETWORK
CLASSIFY
SHORT SNIPPETS
TEXT
DENSE LAYER
ACCESS
INFORMATION
OUTPUT
PREVIOUS LAYER
DROP
RELEVANT INFORMATION
CLASSIFICATION
NEVER RECOVERED
BOTTLENECK
16 DIMENSIONS
LIMITED
NOTE
END
NETWORK
DENSE LAYER
OUTPUT
46 DIMENSIONAL VECTOR
SOFTMAX ACTIVATION
PROBABILITY DISTRIBUTION
EVERY INPUT
46 DIMENSION OUTPUT VECTOR
LOSS FUNCTION
categorical_crossentropy
46 HIDDEN UNITS
BOTTLENECK CASE
FEWER THAN
46 HIDDEN UNITS
EXAMPLE
4 DIMENSIONS
WEAKER PERFORMANCE
PREDICTION
NEW DATA
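The softmax activation described above turns the last layer's 46 raw scores into a probability distribution over the 46 output classes. A NumPy sketch, with random scores standing in for the layer's output:

```python
import numpy as np

def softmax(scores):
    # Subtract the row max for numerical stability, then normalize exponentials
    # so each row is non-negative and sums to 1: a probability distribution.
    shifted = scores - scores.max(axis=-1, keepdims=True)
    exps = np.exp(shifted)
    return exps / exps.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(7)
scores = rng.normal(size=(2, 46))  # raw outputs of a 46-unit last layer (stand-in)
probs = softmax(scores)            # every input yields a 46-dimensional distribution
```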
REGRESSION EXAMPLE
SUMMARY
REGRESSION
DIFFERENT
LOSS FUNCTION
EVALUATION METRICS
PREPROCESSING STEP
FEATURES
INPUT DATA
DIFFERENT RANGE
VALUES
FEATURES
SCALED
INDEPENDENTLY
AVAILABLE
LITTLE DATA
K-FOLD VALIDATION
EVALUATE
MODEL
SMALL NETWORK
FEW HIDDEN LAYERS
AVOID
SEVERE OVERFITTING
PREDICT
CONTINUOUS VALUES
BOSTON HOUSING PRICES, MID-1970s
PREPARE THE DATA
PROBLEMATIC
FEED
NEURAL NETWORK
VALUES
WILDLY DIFFERENT RANGE
SOLUTION
FEATURE WISE NORMALISATION
TRAIN DATA
SUBTRACT
MEAN
DIVIDE
STANDARD DEVIATION
NEARER
0
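The feature-wise normalization above, with the key detail that the mean and standard deviation are computed on the training data only and then reused for the test data. The arrays are random stand-ins (shaped like the Boston data for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)
train_data = rng.normal(loc=50.0, scale=10.0, size=(404, 13))  # stand-in features
test_data = rng.normal(loc=50.0, scale=10.0, size=(102, 13))

mean = train_data.mean(axis=0)  # per-feature mean, from the training set only
std = train_data.std(axis=0)    # per-feature standard deviation, likewise

train_data = (train_data - mean) / std  # each feature: centered near 0, unit std
test_data = (test_data - mean) / std    # normalized with *training* statistics
```

Using training-set statistics for the test data avoids leaking information from the test set into the preprocessing step.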
BUILDING NEURAL NETWORK
LAST LAYER
SINGLE UNIT
LINEAR LAYER
PREDICT
VALUE
FREELY
SCALAR REGRESSION
PREDICT
SINGLE CONTINUOUS VALUE
NO ACTIVATION
SIGMOID
ACTIVATION
PREDICT
ONLY VALUES
BETWEEN 0 AND 1
NETWORK
LOSS FUNCTION
MEAN SQUARED ERROR
SQUARE
DIFFERENCE
PREDICTION
TARGET
MEAN ABSOLUTE ERROR (MAE)
ABSOLUTE VALUE
DIFFERENCE
PREDICTION
TARGET
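The two quantities defined above, computed in NumPy for a toy pair of prediction and target arrays (the values are made up):

```python
import numpy as np

predictions = np.array([2.5, 0.0, 2.0, 8.0])
targets = np.array([3.0, -0.5, 2.0, 7.0])

# Mean squared error: mean of the squared prediction-target differences
# (the usual training loss for scalar regression).
mse = np.mean((predictions - targets) ** 2)

# Mean absolute error: mean of the absolute differences; easier to interpret
# because it is in the same units as the target.
mae = np.mean(np.abs(predictions - targets))
```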
VALIDATION
DATA
DIVIDED
TRAIN
VALIDATION
LITTLE
DATA
LITTLE
VALIDATION SET
CHOICE
VALIDATION SCORE
CHANGE
A LOT
K-FOLD VALIDATION
DIVIDE DATA
K PARTITIONS
IDENTICAL SIZE
TRAIN
TRAINING PARTITION
EVALUATE
REMAINING ONE
FIND
AVERAGE
K VALIDATION SCORES
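The K-fold procedure above as a NumPy sketch: split the data into K partitions of identical size, hold out each partition in turn, and average the K validation scores. `evaluate_fold` is a hypothetical stand-in; a real run would build and train a fresh network for every fold.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(404, 13))
targets = rng.normal(size=404)
k = 4
fold_size = len(data) // k  # K partitions of identical size

def evaluate_fold(train_x, train_y, val_x, val_y):
    # Stand-in for: build a fresh model, fit on the K-1 training partitions,
    # evaluate on the held-out one. Here: MAE of a predict-the-mean baseline.
    prediction = train_y.mean()
    return float(np.mean(np.abs(val_y - prediction)))

scores = []
for i in range(k):
    val_idx = np.arange(i * fold_size, (i + 1) * fold_size)  # fold i: validation
    train_idx = np.setdiff1d(np.arange(len(data)), val_idx)  # the other K-1 folds
    scores.append(evaluate_fold(data[train_idx], targets[train_idx],
                                data[val_idx], targets[val_idx]))

average_score = float(np.mean(scores))  # average of the K validation scores
```

Averaging over K held-out partitions gives a validation estimate that depends far less on which few points happen to land in the validation set.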