Oracle GI 12.1.0.2 and Chronyd Service

After installing or upgrading to Oracle GI 12.1.0.2, chrony is not supported: chronyd is running, yet CTSS stays in Active mode as if no vendor time synchronization service were present.

[grid@racnode2 bin]$ systemctl status chronyd

● chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; 
                                             vendor preset: enabled)
   Active: active (running) since Fri 2020-01-10 16:28:40 AEDT; 
                                                         1h 19min ago
     Docs: man:chronyd(8)
           man:chrony.conf(5)
  Process: 1184 ExecStartPost=/usr/libexec/chrony-helper 
                        update-daemon (code=exited, status=0/SUCCESS)
  Process: 1152 ExecStart=/usr/sbin/chronyd $OPTIONS 
                                    (code=exited, status=0/SUCCESS)
 Main PID: 1166 (chronyd)
   CGroup: /system.slice/chronyd.service
           └─1166 /usr/sbin/chronyd
[root@racnode2 bin]# ./crsctl check css
CRS-4529: Cluster Synchronization Services is online
[root@racnode2 bin]# ./crsctl check ctss
CRS-4701: The Cluster Time Synchronization Service is in Active mode.
CRS-4702: Offset (in msec): 0
[grid@racnode2 bin]$ crsctl stat res -t -init
-------------------------------------------------------------------
Name           Target  State        Server             State details
-------------------------------------------------------------------
Cluster Resources
-------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       racnode2          Started,STABLE
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       racnode2                 STABLE
ora.crf
      1        ONLINE  ONLINE       racnode2                 STABLE
ora.crsd
      1        ONLINE  ONLINE       racnode2                 STABLE
ora.cssd
      1        ONLINE  ONLINE       racnode2                 STABLE
ora.cssdmonitor
      1        ONLINE  ONLINE       racnode2                 STABLE
ora.ctssd
      1        ONLINE  ONLINE       racnode2          ACTIVE:0,STABLE
...
..
.

octssd.trc

2020-01-10 18:47:45.647193 : CTSS:1717679872: sclsctss_gvss2: 
                                      NTP default pid file not found
2020-01-10 18:47:45.647198 : CTSS:1717679872: ctss_check_vendor_sw: 
              Vendor time sync software is not detected. status [1].

CAUSE

According to Oracle, this is a bug: CTSS only checks for ntpd and does not detect chronyd as vendor time synchronization software, so it stays in Active mode.

Workaround

Apply patch 20936562, which is included in most GI PSUs since GI PSU 160719 (July 2016).

[grid@racnode2 tmp]$ opatch lsinventory | grep 20936562
19471836, 24445255, 20936562, 28805158, 25037011, 22144696, 18750781

[grid@racnode2 tmp]$ $ORACLE_HOME/OPatch/opatch lsinventory
...
..
.
Unique Patch ID: 22886676
Patch description: "OCW PATCH SET UPDATE 12.1.0.2.190716 (29509318)"
Created on 27 Jun 2019, 08:01:08 hrs PST8PDT
...
..
.
[grid@racnode2 trace]$ crsctl check css
CRS-4529: Cluster Synchronization Services is online

[grid@racnode2 trace]$ crsctl check ctss
CRS-4700: The Cluster Time Synchronization Service is in Observer mode
[grid@racnode2 trace]$ crsctl stat res -t -init
-------------------------------------------------------------------
Name           Target  State        Server             State details
-------------------------------------------------------------------
Cluster Resources
------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       racnode2           Started,STABLE
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       racnode2                 STABLE
ora.crf
      1        ONLINE  ONLINE       racnode2                 STABLE
ora.crsd
      1        ONLINE  ONLINE       racnode2                 STABLE
ora.cssd
      1        ONLINE  ONLINE       racnode2                 STABLE
ora.cssdmonitor
      1        ONLINE  ONLINE       racnode2                 STABLE
ora.ctssd
      1        ONLINE  ONLINE       racnode2         OBSERVER,STABLE
...
..
.

octssd.trc

2020-01-10 20:52:58.219654 :  CTSS:1530910464: sclsctss_gvss5: 
                              Chrony active, forcing observer mode
2020-01-10 20:52:58.219657 :  CTSS:1530910464: ctss_check_vendor_sw: 
                   Vendor time sync software is detected. status [2].

Keep in mind that cluvfy still checks for the ntpd configuration file “/etc/ntp.conf”, so its clock synchronization check reports failures even though CTSS is correctly in Observer mode. These errors can be ignored.

[grid@racnode2]$ cluvfy comp clocksync -n all -verbose

Verifying Clock Synchronization across the cluster nodes

Checking if Clusterware is installed on all nodes...
Oracle Clusterware is installed on all nodes.

Checking if CTSS Resource is running on all nodes...
Check: CTSS Resource running on all nodes
  Node Name                             Status
  ------------------------------------  ------------------------
  racnode2                              passed
  racnode1                              passed
CTSS resource check passed

Querying CTSS for time offset on all nodes...
Query of CTSS for time offset passed

Check CTSS state started...
Check: CTSS state
  Node Name                             State
  ------------------------------------  ------------------------
  racnode2                              Observer
  racnode1                              Observer
CTSS is in Observer state. Switching over to clock synchronization 
                                                  checks using NTP


Starting Clock synchronization checks using Network Time Protocol(NTP)...

Checking existence of NTP configuration file "/etc/ntp.conf" across 
nodes:
  Node Name                             File exists?
  ------------------------------------  ------------------------
  racnode2                              no
  racnode1                              no
PRVG-1019 : The NTP configuration file "/etc/ntp.conf" does not exist 
            on nodes "racnode2,racnode1"
PRVF-5414 : Check of NTP Config file failed on all nodes. 
            Cannot proceed further for the NTP tests

Checking daemon liveness...

Check: Liveness for "ntpd"
  Node Name                             Running?
  ------------------------------------  ------------------------
  racnode2                              no
  racnode1                              no
PRVF-7590 : "ntpd" is not running on node "racnode2"
PRVF-7590 : "ntpd" is not running on node "racnode1"
PRVG-1024 : The NTP Daemon or Service was not running on any of the 
            cluster nodes.
PRVF-5415 : Check to see if NTP daemon or service is running failed
Result: Clock synchronization check using Network Time Protocol(NTP) 
        failed

PRVF-9652 : Cluster Time Synchronization Services check failed

Verification of Clock Synchronization across the cluster nodes was 
unsuccessful on all the specified nodes.

xfs_growfs: /dev/ol/root is not a mounted XFS filesystem

On Oracle Linux 7.7, running “xfs_growfs” against the logical volume path produces an error:

[root@racnode1 ~]# xfs_growfs /dev/ol/root
xfs_growfs: /dev/ol/root is not a mounted XFS filesystem

The syntax has changed in this release: xfs_growfs now expects the mount point (e.g. /) rather than the logical volume path such as /dev/ol/root.

[root@racnode1 ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
devtmpfs             4.8G     0  4.8G   0% /dev
tmpfs                2.0G  4.0K  2.0G   1% /dev/shm
tmpfs                4.8G  8.7M  4.8G   1% /run
tmpfs                4.8G     0  4.8G   0% /sys/fs/cgroup
/dev/mapper/ol-root   27G   17G   11G  62% /
/dev/sda1            497M  120M  377M  25% /boot
tmpfs                973M     0  973M   0% /run/user/0
tmpfs                973M     0  973M   0% /run/user/54321
[root@racnode1 ~]# xfs_info /
meta-data=/dev/mapper/ol-root    isize=256    agcount=4, agsize=1734656 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0 spinodes=0 rmapbt=0
         =                       reflink=0
data     =                       bsize=4096   blocks=6938624, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=3388, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@racnode1 ~]#  xfs_growfs /
meta-data=/dev/mapper/ol-root    isize=256    agcount=4, agsize=1734656 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0 spinodes=0 rmapbt=0
         =                       reflink=0
data     =                       bsize=4096   blocks=6938624, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=3388, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 6938624 to 15974400

We can confirm that the root file system (/) has grown from 27 GB to 61 GB.

[root@racnode1 ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
devtmpfs             4.8G     0  4.8G   0% /dev
tmpfs                2.0G  4.0K  2.0G   1% /dev/shm
tmpfs                4.8G  8.7M  4.8G   1% /run
tmpfs                4.8G     0  4.8G   0% /sys/fs/cgroup
/dev/mapper/ol-root   61G   17G   45G  27% /
/dev/sda1            497M  120M  377M  25% /boot
tmpfs                973M     0  973M   0% /run/user/0
tmpfs                973M     0  973M   0% /run/user/54321
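The numbers line up: xfs_growfs reports sizes in filesystem blocks (bsize=4096 here), so the before/after block counts convert to GiB as a quick sanity check:

```shell
# Sanity check: convert the xfs_growfs block counts (bsize=4096)
# into GiB and compare with what df -h reports.
bsize=4096
awk -v blk=6938624  -v bs="$bsize" \
    'BEGIN { printf "before: %.1f GiB\n", blk*bs/1024/1024/1024 }'
awk -v blk=15974400 -v bs="$bsize" \
    'BEGIN { printf "after:  %.1f GiB\n", blk*bs/1024/1024/1024 }'
```

That gives roughly 26.5 GiB before and 60.9 GiB after, which df -h rounds to 27G and 61G.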

How to Increase a Linux Swap Logical Volume (LVM) Size

Check which volume group (VG) the swap LV belongs to.

[root@racnode1 ~]# lvdisplay
--- Logical volume ---
LV Path /dev/ol/swap
LV Name swap
VG Name ol
LV UUID epJhR9-0sM6-bK6L-qubv-lrJb-0P0X-2Ab5fD
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2015-11-24 00:34:14 +1100
LV Status available
# open 2
LV Size 3.00 GiB
Current LE 768
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 252:1

Make sure VG ol has enough free space to extend the swap LV.

[root@racnode1 ~]# vgdisplay
--- Volume group ---
VG Name ol
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 5
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 2
Act PV 2
VG Size 95.50 GiB
PE Size 4.00 MiB
Total PE 24449
Alloc PE / Size 7544 / <29.47 GiB
Free PE / Size 16905 / <66.04 GiB
VG UUID 55lr8l-8d38-GGIX-j0Cm-InKP-Jiyb-refLmt

Disable swapping for the associated logical volume.

[root@racnode1 ~]# swapoff -v  /dev/ol/swap
swapoff /dev/ol/swap

[root@racnode1 ~]# free -g
              total   used      free   shared  buff/cache   available
Mem:              9      0         9        0           0           9
Swap:             0      0         0
[root@racnode1 ~]#

Resize the logical volume; here the swap volume grows from 3 GB to 8 GB. The --test run previews the change without updating any metadata.

[root@racnode1 ~]# lvresize /dev/ol/swap -L +5G --test
  TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
  Size of logical volume ol/swap changed from 3.00 GiB (768 extents) to 8.00 GiB (2048 extents).
  Logical volume ol/swap successfully resized.
[root@racnode1 ~]# lvresize /dev/ol/swap -L +5G
  Size of logical volume ol/swap changed from 3.00 GiB (768 extents) to 8.00 GiB (2048 extents).
  Logical volume ol/swap successfully resized.
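The extent counts in the lvresize output follow directly from the PE size: with the 4 MiB extents shown by vgdisplay, 3 GiB is 768 extents and the added 5 GiB is 1280 more. A quick cross-check:

```shell
# Extent arithmetic for the resize above (PE size 4 MiB, from vgdisplay).
pe_mib=4
old_extents=$(( 3 * 1024 / pe_mib ))    # 3 GiB  -> 768 extents
add_extents=$(( 5 * 1024 / pe_mib ))    # +5 GiB -> 1280 extents
new_extents=$(( old_extents + add_extents ))
echo "$old_extents + $add_extents = $new_extents"   # 768 + 1280 = 2048
```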

Format the new swap space.

[root@racnode1 ~]# mkswap /dev/ol/swap
mkswap: /dev/ol/swap: warning: wiping old swap signature.
Setting up swapspace version 1, size = 8388604 KiB
no label, UUID=3bfcff72-20c3-4e1e-ad38-20de7cca2050
[root@racnode1 ~]#

Enable the extended logical volume.

[root@racnode1 ~]# swapon -va
swapon /dev/mapper/ol-swap
swapon: /dev/mapper/ol-swap: found swap signature: version 1, page-size 4, same byte order
swapon: /dev/mapper/ol-swap: pagesize=4096, swapsize=8589934592, devsize=8589934592
[root@racnode1 ~]#

Test that the logical volume has been extended properly.

[root@racnode1 ~]# free -g
              total        used        free      shared  buff/cache   available
Mem:              9           0           9           0           0           9
Swap:             7           0           7
[root@racnode1 ~]# cat /proc/swaps
Filename                  Type            Size    Used    Priority
/dev/dm-1                 partition       8388604 0       -2
[root@racnode1 ~]#
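Note that free -g shows 7 rather than 8: mkswap reserves the first page (4 KiB) of the device for the swap header, leaving 8388604 KiB usable (as /proc/swaps shows), and free -g truncates to whole GiB. The arithmetic:

```shell
# Why free -g reports 7 GiB for an 8 GiB swap LV: mkswap uses one
# 4 KiB page for the swap header, and free -g truncates to whole GiB.
total_kib=$(( 8 * 1024 * 1024 ))     # 8 GiB in KiB
usable_kib=$(( total_kib - 4 ))      # minus the 4 KiB header page
echo "$usable_kib KiB usable"                       # 8388604, as in /proc/swaps
echo "$(( usable_kib / 1024 / 1024 )) GiB (free -g, truncated)"   # 7
```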

There is only one VG (ol).

[root@racnode1 ~]# vgdisplay
--- Volume group ---
VG Name ol
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 6
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 2
Act PV 2
VG Size 95.50 GiB
PE Size 4.00 MiB
Total PE 24449
Alloc PE / Size 8824 / <34.47 GiB
Free PE / Size 15625 / <61.04 GiB
VG UUID 55lr8l-8d38-GGIX-j0Cm-InKP-Jiyb-refLmt

There are two LVs (swap, root).

[root@racnode1 ~]# lvdisplay

--- Logical volume ---
LV Path /dev/ol/swap
LV Name swap
VG Name ol
LV UUID epJhR9-0sM6-bK6L-qubv-lrJb-0P0X-2Ab5fD
LV Write Access read/write
LV Creation host,time localhost.localdomain,2015-11-24 00:34:14 +1100
LV Status available
# open 2
LV Size 8.00 GiB
Current LE 2048
Segments 3
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 252:1

--- Logical volume ---
LV Path /dev/ol/root
LV Name root
VG Name ol
LV UUID N0D1Hr-FbhE-R7rt-WuUX-9PQU-jVfH-eF5UtL
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2015-11-24 00:34:14 +1100
LV Status available
# open 1
LV Size <26.47 GiB
Current LE 6776
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 252:0

VG ol is made up of two PVs (/dev/sda2 and /dev/sda3).

[root@racnode1 ~]# pvdisplay
--- Physical volume ---
PV Name /dev/sda2
VG Name ol
PV Size 29.51 GiB / not usable 3.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 7554
Free PE 0
Allocated PE 7554
PV UUID ifffmk-GSXU-NcKC-vww1-HMF6-Nc36-A6sewK

--- Physical volume ---
PV Name /dev/sda3
VG Name ol
PV Size 66.00 GiB / not usable 4.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 16895
Free PE 15625
Allocated PE 1270
PV UUID DG6gQe-VQki-1oT2-V1wk-1etX-6qkp-7QP3cX

[root@racnode1 ~]#

Upgrade Unbreakable Enterprise Kernel (UEK) from UEKR3 to UEKR5 on Oracle Linux 7

Before proceeding with the upgrade steps, record the current state of the Oracle Linux 7 system:

Server Release Version: 7.2
UEK Release: ol7_UEKR3
ol7_addons repository for various addons such as Docker Engine
ol7_latest repository for the latest releases of OL7 core packages
PostgreSQL 9.6

[root@racnode1 ~]# cat /etc/system-release
Oracle Linux Server release 7.2
[root@racnode1 ~]# yum repolist
Loaded plugins: ulninfo
Repodata is over 2 weeks old. Install yum-cron? Or run: yum makecache fast
repo id                         repo name                                   status
!ksplice-uptrack/7Server/x86_64 Ksplice Uptrack for OL                         16
!ol7_UEKR3/x86_64               Latest Unbreakable Enterprise Kernel 
                                Release 3 for Oracle Linux 7Server (x86_64)   912
!ol7_addons/x86_64              Oracle Linux 7Server Add ons (x86_64)         340
!ol7_latest/x86_64              Oracle Linux 7Server Latest (x86_64)       13,211
!pgdg96/7Server/x86_64          PostgreSQL 9.6 7Server - x86_64               875
repolist: 15,354
[root@racnode1 ~]#

Download the latest repository definitions from http://yum.oracle.com/public-yum-ol7.repo and place the file in the directory where yum expects it, /etc/yum.repos.d/:

[root@racnode1 yum.repos.d]# cd /etc/yum.repos.d
[root@racnode1 yum.repos.d]# ls -ltr
total 16
-rw-r--r-- 1 root root 191 Sep 2 2011 ksplice-uptrack.repo
-rw-r--r-- 1 root root 2560 Nov 20 2015 public-yum-ol7.repo.rpmnew
-rw-r--r-- 1 root root 2323 Nov 28 2015 public-yum-ol7.repo
-rw-r--r-- 1 root root 1012 Sep 21 2016 pgdg-96-redhat.repo
[root@racnode1 yum.repos.d]# wget http://yum.oracle.com/public-yum-ol7.repo
--2020-01-06 11:35:37-- http://yum.oracle.com/public-yum-ol7.repo
Resolving yum.oracle.com (yum.oracle.com)... 104.116.170.250
Connecting to yum.oracle.com (yum.oracle.com)|104.116.170.250|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 16402 (16K) [text/plain]
Saving to: ‘public-yum-ol7.repo.1’

2020-01-06 11:35:51 (646 KB/s) - ‘public-yum-ol7.repo.1’ saved [16402/16402]

Because public-yum-ol7.repo already exists, wget saved the download as “public-yum-ol7.repo.1”. Back up the old file and rename the new one into place:

[root@racnode1 yum.repos.d]# ls -ltr public-yum-ol7.repo*
-rw-r--r-- 1 root root 2560 Nov 20 2015 public-yum-ol7.repo.rpmnew
-rw-r--r-- 1 root root 2323 Nov 28 2015 public-yum-ol7.repo
-rw-r--r-- 1 root root 16402 Aug 26 23:57 public-yum-ol7.repo.1
[root@racnode1 yum.repos.d]# mv public-yum-ol7.repo public-yum-ol7.repo.bak
[root@racnode1 yum.repos.d]# mv public-yum-ol7.repo.1 public-yum-ol7.repo
[root@racnode1 yum.repos.d]#

Enable the new repository and disable the old one. Install yum-utils first if it is not already present; otherwise “yum-config-manager” will not be found.

[root@racnode1 yum.repos.d]# yum-config-manager --enable ol7_UEKR5
-bash: yum-config-manager: command not found


[root@racnode1 yum.repos.d]# yum install yum-utils
...
..
.
Complete!
[root@racnode1 yum.repos.d]#
[root@racnode1 yum.repos.d]# yum-config-manager --enable ol7_UEKR5
===== repo: ol7_UEKR5 =============================================
[ol7_UEKR5]
async = True
bandwidth = 0
base_persistdir = /var/lib/yum/repos/x86_64/7Server
baseurl = https://yum.oracle.com/repo/OracleLinux/OL7/UEKR5/x86_64/
cache = 0
cachedir = /var/cache/yum/x86_64/7Server/ol7_UEKR5
check_config_file_age = True
compare_providers_priority = 80
cost = 1000
deltarpm_metadata_percentage = 100
deltarpm_percentage =
enabled = True
enablegroups = True
exclude =
failovermethod = priority
ftp_disable_epsv = False
gpgcadir = /var/lib/yum/repos/x86_64/7Server/ol7_UEKR5/gpgcadir
gpgcakey =
gpgcheck = True
gpgdir = /var/lib/yum/repos/x86_64/7Server/ol7_UEKR5/gpgdir
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
hdrdir = /var/cache/yum/x86_64/7Server/ol7_UEKR5/headers
http_caching = all
includepkgs =
ip_resolve =
keepalive = True
keepcache = False
mddownloadpolicy = sqlite
mdpolicy = group:small
mediaid =
metadata_expire = 21600
metadata_expire_filter = read-only:present
metalink =
minrate = 0
mirrorlist =
mirrorlist_expire = 86400
name = Latest Unbreakable Enterprise Kernel Release 5 for Oracle Linux 7
       Server (x86_64)
old_base_cache_dir =
password =
persistdir = /var/lib/yum/repos/x86_64/7Server/ol7_UEKR5
pkgdir = /var/cache/yum/x86_64/7Server/ol7_UEKR5/packages
proxy = False
proxy_dict =
proxy_password =
proxy_username =
repo_gpgcheck = False
retries = 10
skip_if_unavailable = False
ssl_check_cert_permissions = True
sslcacert =
sslclientcert =
sslclientkey =
sslverify = True
throttle = 0
timeout = 30.0
ui_id = ol7_UEKR5/x86_64
ui_repoid_vars = releasever,
basearch
username =
[root@racnode1 yum.repos.d]#
[root@racnode1 yum.repos.d]# yum-config-manager --disable ol7_UEKR3
==================== repo: ol7_UEKR3 ===========================
[ol7_UEKR3]
async = True
bandwidth = 0
base_persistdir = /var/lib/yum/repos/x86_64/7Server
baseurl = https://yum.oracle.com/repo/OracleLinux/OL7/UEKR3/x86_64/
cache = 0
cachedir = /var/cache/yum/x86_64/7Server/ol7_UEKR3
check_config_file_age = True
compare_providers_priority = 80
cost = 1000
deltarpm_metadata_percentage = 100
deltarpm_percentage =
enabled = False
enablegroups = True
exclude =
failovermethod = priority
ftp_disable_epsv = False
gpgcadir = /var/lib/yum/repos/x86_64/7Server/ol7_UEKR3/gpgcadir
gpgcakey =
gpgcheck = True
gpgdir = /var/lib/yum/repos/x86_64/7Server/ol7_UEKR3/gpgdir
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
hdrdir = /var/cache/yum/x86_64/7Server/ol7_UEKR3/headers
http_caching = all
includepkgs =
ip_resolve =
keepalive = True
keepcache = False
mddownloadpolicy = sqlite
mdpolicy = group:small
mediaid =
metadata_expire = 21600
metadata_expire_filter = read-only:present
metalink =
minrate = 0
mirrorlist =
mirrorlist_expire = 86400
name = Latest Unbreakable Enterprise Kernel Release 3 for Oracle Linux 7Server (x86_64)
old_base_cache_dir =
password =
persistdir = /var/lib/yum/repos/x86_64/7Server/ol7_UEKR3
pkgdir = /var/cache/yum/x86_64/7Server/ol7_UEKR3/packages
proxy = False
proxy_dict =
proxy_password =
proxy_username =
repo_gpgcheck = False
retries = 10
skip_if_unavailable = False
ssl_check_cert_permissions = True
sslcacert =
sslclientcert =
sslclientkey =
sslverify = True
throttle = 0
timeout = 30.0
ui_id = ol7_UEKR3/x86_64
ui_repoid_vars = releasever,
basearch
username =

[root@racnode1 yum.repos.d]# yum repolist
Loaded plugins: ulninfo
repo id                        repo name                      status
ksplice-uptrack/7Server/x86_64 Ksplice Uptrack for OL 16
ol7_UEKR5/x86_64      Latest Unbreakable Enterprise Kernel Release 5
                      for Oracle Linux Server (x86_64)           193
ol7_latest/x86_64     Oracle Linux 7Server Latest (x86_64)    16,060
pgdg96/7Server/x86_64 PostgreSQL 9.6 7Server - x86_64          1,198
repolist: 17,467
[root@racnode1 yum.repos.d]#

The ol7_addons repository is not enabled by default, so enable it manually:

[root@racnode1 yum.repos.d]# yum-config-manager --enable ol7_addons
[root@racnode1 yum.repos.d]# yum repolist
Loaded plugins: ulninfo
repo id                         repo name                      status
ksplice-uptrack/7Server/x86_64  Ksplice Uptrack for OL             16
ol7_UEKR5/x86_64         Latest Unbreakable Enterprise Kernel 
                   Release 5 for Oracle Linux 7Server (x86_64)     193
ol7_addons/x86_64  Oracle Linux 7Server Add ons (x86_64)          340
ol7_latest/x86_64  Oracle Linux 7Server Latest (x86_64)        16,060
pgdg96/7Server/x86_64 PostgreSQL 9.6 7Server - x86_64           1,198
repolist: 17,807
[root@racnode1 yum.repos.d]#

Now run the yum upgrade command, which removes obsoleted packages as part of the transaction; use the update command instead if you want to keep the old packages.

[root@racnode1 yum.repos.d]# yum upgrade
Loaded plugins: ulninfo
Resolving Dependencies
-> Running transaction check
->Package NetworkManager.x86_64 1:1.0.6-27.0.1.el7 will be obsoleted
->Package NetworkManager.x86_64 1:1.18.0-5.el7_7.1 will be obsoleting
...

...

Updating : libuser-0.60-9.el7.x86_64 180/733
Updating : postgresql96-libs-9.6.16-2PGDG.rhel7.x86_64 181/733
Updating : passwd-0.79-5.el7.x86_64 182/733
Updating : curl-7.29.0-54.0.1.el7_7.1.x86_64 183/733

IMPORTANT: A legacy Oracle Linux yum server repo file was found. 
Oracle Linux yum server repository configurations have changed which 
means public-yum-ol7.repo will no longer be updated. New repository 
configuration files have been installed but are disabled. 
To complete the transition, run this script as the root user:

/usr/bin/ol_yum_configure.sh

See https://yum.oracle.com/faq.html for more information.

Installing : oraclelinux-release-el7-1.0-8.el7.x86_64 184/733
Updating : gnupg2-2.0.22-5.el7_5.x86_64 185/733
Updating : rhn-client-tools-2.0.2-24.0.7.el7.x86_64

...
..
.

Complete!

Run the command below, as instructed by the warning message during the upgrade:

[root@racnode1 yum.repos.d]# /usr/bin/ol_yum_configure.sh
Repository ol7_UEKR5 already enabled
Repository ol7_addons already enabled
Repository ol7_latest already enabled
[root@racnode1 yum.repos.d]# yum list installed kernel-uek
Loaded plugins: ulninfo
Installed Packages
kernel-uek.x86_64 3.8.13-55.1.6.el7uek @anaconda/7.1
kernel-uek.x86_64 3.8.13-118.2.1.el7uek @ol7_UEKR3
kernel-uek.x86_64 4.14.35-1902.8.4.el7uek @ol7_UEKR5
[root@racnode1 yum.repos.d]#
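With three kernel-uek versions installed, a version-aware sort confirms which one is the newest and should come up after the reboot. A small sketch (version strings copied from the listing above):

```shell
# Pick the newest installed kernel-uek version with a version-aware sort;
# on the server, this can then be compared against `uname -r`.
newest=$(printf '%s\n' \
    3.8.13-55.1.6.el7uek \
    3.8.13-118.2.1.el7uek \
    4.14.35-1902.8.4.el7uek | sort -V | tail -n 1)
echo "$newest"   # 4.14.35-1902.8.4.el7uek
```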

Reboot the server and verify that the upgrade was successful.

[root@racnode1 yum.repos.d]# reboot
login as: root
root@192.168.78.51's password:
Last login: Mon Jan 6 11:03:51 2020 from 192.168.78.1
[root@racnode1 ~]# uname -r
4.14.35-1902.8.4.el7uek.x86_64
[root@racnode1 ~]# cat /etc/system-release
Oracle Linux Server release 7.7
[root@racnode1 ~]#

Uninstall the old kernel(s)

[root@racnode1 ~]# yum list installed kernel*
Loaded plugins: ulninfo
Installed Packages
kernel.x86_64              3.10.0-229.el7            @anaconda/7.1
kernel.x86_64              3.10.0-327.el7            @ol7_latest
kernel.x86_64              3.10.0-1062.9.1.el7       @ol7_latest
kernel-devel.x86_64        3.10.0-229.el7            @ol7_latest
kernel-devel.x86_64        3.10.0-327.el7            @ol7_latest
kernel-devel.x86_64        3.10.0-1062.9.1.el7       @ol7_latest
kernel-headers.x86_64      3.10.0-1062.9.1.el7       @ol7_latest
kernel-tools.x86_64        3.10.0-1062.9.1.el7       @ol7_latest
kernel-tools-libs.x86_64   3.10.0-1062.9.1.el7       @ol7_latest
kernel-uek.x86_64          3.8.13-55.1.6.el7uek      @anaconda/7.1
kernel-uek.x86_64          3.8.13-118.2.1.el7uek     @ol7_UEKR3
kernel-uek.x86_64          4.14.35-1902.8.4.el7uek   @ol7_UEKR5
kernel-uek-devel.x86_64    3.8.13-118.2.1.el7uek     @ol7_UEKR3
kernel-uek-devel.x86_64    4.14.35-1902.8.4.el7uek   @ol7_UEKR5
kernel-uek-firmware.noarch 3.8.13-55.1.6.el7uek      @anaconda/7.1
kernel-uek-firmware.noarch 3.8.13-118.2.1.el7uek     @ol7_UEKR3

The kernel.x86_64 packages (3.10.0) and the old 3.8.13 UEK kernels are no longer required, so they can be removed. Yum will not remove the currently running kernel, so the command below uninstalls only the unwanted kernels.

[root@racnode1 ~]# yum remove kernel*
[root@racnode1 ~]# yum list installed kernel*
Loaded plugins: ulninfo
Installed Packages
kernel-uek.x86_64 4.14.35-1902.8.4.el7uek @ol7_UEKR5
[root@racnode1 ~]#