Project Archive

Mission_Log
5 Projects · 3 Categories · 3 Certs
Cloud
Full AWS architecture provisioned end to end with Terraform.
AWS Infra Deploy

Provisioned a full AWS environment — VPC, S3 static site hosting, DynamoDB, and least-privilege IAM roles enforced end to end.

Terraform · AWS VPC · S3 · DynamoDB · IAM
// Meta
Provider: AWS
IaC: Terraform
Region: us-east-1
Status: Deployed
// Architecture
Resource        Service    Status
sakora-vpc      VPC        ACTIVE
sakora-bucket   S3         DEPLOYED
sakora-db       DynamoDB   ACTIVE
sakora-role     IAM        ENFORCED
// What I Built
01  Initialised Terraform & provider config — project structure, AWS provider block, target region declared.
02  Built VPC & subnet architecture — 10.0.0.0/16 CIDR, public/private subnets, Internet Gateway (sketched under // Code).
03  Deployed static site to S3 — bucket with website hosting, public read policy.
04  Configured DynamoDB — on-demand billing, point-in-time recovery, access scoped to the private subnet.
05  Enforced least-privilege IAM — scoped policies, no wildcard permissions.
// Code
iam.tf — Terraform
resource "aws_iam_policy" "s3_policy" {
  name = "sakora-s3-policy"
  policy = jsonencode({
    Statement = [{
      Effect = "Allow"
      Action = ["s3:GetObject", "s3:PutObject"]
      Resource = "arn:aws:s3:::sakora-bucket/*"
    }]
  })
}
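
A minimal sketch of the VPC and public-subnet pieces from step 02; the subnet CIDR and the availability-zone wiring are illustrative, not pulled from the deployed config.

vpc.tf — Terraform (sketch)
resource "aws_vpc" "sakora_vpc" {
  cidr_block = "10.0.0.0/16"

  tags = { Name = "sakora-vpc" }
}

resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.sakora_vpc.id
  cidr_block              = "10.0.1.0/24" # illustrative carve-out of the /16
  map_public_ip_on_launch = true
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.sakora_vpc.id
}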
// Screenshots
AWS Console — VPC
Terraform Apply
S3 Static Hosting
IAM Policy View
// Challenges & Learnings
Challenges
IAM Policy Scope Creep: Started with broad ARNs — iterated until permissions matched the minimum required per service.
S3 Public Access Blocks: AWS blocks public access by default — had to configure the block settings alongside the bucket policy.
Learnings
IaC Discipline: Every resource in code is tracked, repeatable, and reviewable.
Least Privilege in Practice: Writing scoped IAM policies by hand gave a much deeper understanding of the AWS permission model.
Security
Wazuh SIEM for threat detection and VM isolation via Linux bridges.
Wazuh SIEM

Full SIEM deployment — Wazuh agents on Windows and Linux endpoints, real-time alerting on auth events, and custom detection rules tuned against lab attack simulations.

Wazuh · Elastic Stack · Filebeat · Sysmon · Windows Event Log
// Status
Platform: Linux
Agents: Win + Linux
Status: Monitoring
// What I Built
01  Wazuh manager + Elastic Stack — deployed on Ubuntu, integrated with Elasticsearch and Kibana for dashboards.
02  Agent deployment — Wazuh agents on Windows (with Sysmon) and Linux endpoints forwarding all logs to the manager.
03  Custom detection rules — XML rules for brute force, lateral movement, and privilege escalation patterns (see the rule sketch after this list).
04  Active response — automated firewall blocks triggered on high-severity SSH brute-force alerts.
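
A minimal sketch of what one of the custom brute-force rules might look like. The rule ID, frequency, and timeframe are illustrative; SID 5716 is the stock Wazuh sshd authentication-failure rule it builds on.

local_rules.xml — Wazuh (sketch)
<group name="local,sshd,">
  <!-- Fire when the same source trips the stock auth-failure rule repeatedly -->
  <rule id="100100" level="10" frequency="8" timeframe="120">
    <if_matched_sid>5716</if_matched_sid>
    <same_source_ip />
    <description>Multiple failed SSH logins from the same source.</description>
  </rule>
</group>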
// Alert Feed
$ wazuh-logtest --alert-level 7
ALERT 10: Multiple failed SSH logins on 10.0.0.15 ⚠
ALERT 12: Mimikatz signature on WIN-DC01 ⚠
INFO: 4624 Logon — user sakora — ALLOWED
// Challenges & Learnings
Challenges
Log Noise Volume: Raw Windows event forwarding was too verbose — required Sysmon config tuning and custom rule filters.
Sysmon Config: The default config was too broad — adopted a community config and trimmed it further for lab-scale volumes.
Learnings
Detection Engineering: Writing custom rules taught how defenders model attacker behaviour and what makes a good signal.
Log Pipeline Internals: End-to-end understanding of the event flow, endpoint → Filebeat → Wazuh → Elastic.
VM Isolation

Network-isolated VMs using Linux bridges — dedicated bridge segments per traffic role with iptables DROP rules enforcing strict inter-segment isolation for safe lab simulations.

Linux Bridges · KVM/QEMU · iptables · netplan · ip link
// Network Map
Bridge        VMs         Role
br-mgmt       dc, siem    Management
br-lab        workloads   Lab Net
br-isolated   malware     Air-Gap
// What I Built
01  Linux bridge topology — named bridges per traffic role: management, lab, and isolated/air-gap segments.
02  iptables isolation rules — explicit DROP rules on the FORWARD chain blocking all inter-bridge traffic (see the sketch after this list).
03  Air-gapped malware segment — br-isolated has no upstream route, safe for running malware samples.
04  Verification testing — confirmed isolation with ping/traceroute and Wireshark packet captures.
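
A sketch of the isolation rules from step 02, using the bridge names from the network map; the remaining segment pairs follow the same pattern.

# Inter-bridge traffic is routed through the host, so it hits the FORWARD chain
$ iptables -A FORWARD -i br-lab -o br-isolated -j DROP
$ iptables -A FORWARD -i br-isolated -o br-lab -j DROP
$ iptables -A FORWARD -i br-mgmt -o br-isolated -j DROP
$ iptables -A FORWARD -i br-isolated -o br-mgmt -j DROP
# Persist across reboots (via iptables-persistent, per the challenges below)
$ netfilter-persistent save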
// Challenges & Learnings
Challenges
Bridge Forwarding Default: Linux bridges forward between all attached interfaces by default — had to explicitly DROP on the FORWARD chain per segment pair.
Persistent Rules: iptables rules don't survive a reboot — resolved with iptables-persistent and systemd service ordering.
Learnings
Linux Networking Internals: Hands-on work with bridges, namespaces, and iptables gave a far deeper understanding than docs or simulators.
Defence by Design: Designing the network topology before deploying VMs is far easier than retrofitting isolation after the fact.
Home Lab
Enterprise infra simulation and VM provisioning automated with Terraform.
Enterprise Infra Stack

Active Directory domain with GPO enforcement and a minimal Kubernetes cluster — simulating an on-prem enterprise environment for IAM, policy management, and container orchestration practice.

Active Directory · Kubernetes · GPO · kubeadm · Windows Server
// Status
Active Dir: Online
Kubernetes: Running
Domain: sakora.lab
// What I Built
01  Active Directory domain — Windows Server DC, OUs structured by department, GPOs for password policy and desktop lockdowns (see the PowerShell sketch after the terminal).
02  Domain-joined workstations — Windows VMs joined to the domain, policy applied at login, users managed via AD Users & Computers.
03  Kubernetes cluster (kubeadm) — minimal single control plane, Flannel CNI, namespace isolation, basic workload scheduling (see the kubeadm sketch after this list).
04  Attack & defend simulations — brute force, lateral movement, and privilege escalation exercises against the AD environment.
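
The pod-CIDR fix from the challenges below comes down to a single flag at init time. A sketch, assuming Flannel's default pod network:

$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16
$ kubectl apply -f kube-flannel.yml   # manifest from the flannel-io/flannel repo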
// Terminal
$ kubectl get nodes
k8s-ctrl   Ready  control-plane
k8s-wkr1   Ready  worker
---
PS> Get-ADUser -Filter * | Select Name
Administrator, g.sakora, svc_backup
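
A sketch of the GPO wiring from step 01. The GPO name and OU are illustrative; the domain matches the status block above.

PS> New-GPO -Name "Desktop Lockdown" |
      New-GPLink -Target "OU=Workstations,DC=sakora,DC=lab"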
// Challenges & Learnings
Challenges
AD DNS Issues: Domain joins were failing because client DNS pointed to the ISP, not the DC. Fixed via DNS forwarding on the DC.
K8s CNI Conflicts: CNI plugin conflicts during kubeadm init — resolved by pre-pulling images and setting the correct pod CIDR.
Learnings
AD & Enterprise SecurityBuilding from scratch revealed how much of Windows security sits on AD — GPOs, Kerberos, and delegation.
Kubernetes InternalsManual bootstrapping gave far deeper understanding of control plane components than managed K8s ever would.
VM Automation

Entire VM lifecycle managed as code — Terraform with the libvirt provider declaratively provisions KVM VMs, injects cloud-init config at boot, and tears down cleanly for repeatable lab resets.

Terraform · libvirt · cloud-init · KVM/QEMU · virsh
// What I Built
01  Terraform + libvirt provider — declarative VM definitions: vCPU, RAM, disk image, bridge attachment, and cloud-init volume.
02  Cloud-init provisioning — hostname, SSH keys, user accounts, and packages injected at first boot without manual steps (see the cloudinit.tf sketch under // Code).
03  Modular VM definitions — reusable Terraform modules for different VM roles (DC, workstation, SIEM) with variable-driven config.
04  Snapshot-based lab resets — destroy and reprovision the full lab in under two minutes from a clean baseline.
// Code
vm.tf — Terraform/libvirt
resource "libvirt_domain" "lab_vm" {
  name = "worker-01"
  vcpu = 2
  memory = 2048
  network_interface {
    bridge = "br-lab"
  }
  cloudinit = libvirt_cloudinit_disk.ci.id
}
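
And the ci cloud-init disk it references, sketched with illustrative user-data; the user name and SSH key are placeholders.

cloudinit.tf — Terraform/libvirt (sketch)
resource "libvirt_cloudinit_disk" "ci" {
  name      = "ci-worker-01.iso"
  user_data = <<-EOT
    #cloud-config
    hostname: worker-01
    users:
      - name: sakora
        ssh_authorized_keys:
          - ssh-ed25519 AAAA...   # placeholder key
    packages:
      - qemu-guest-agent
  EOT
}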
// Challenges & Learnings
Challenges
Cloud-init Race Conditions: Network wasn't ready before cloud-init ran — fixed with systemd dependency ordering in the unit file.
libvirt Provider Quirks: Volume and domain resources needed explicit dependency ordering — depends_on was required in several places.
Learnings
IaC for Home Labs: Treating VMs as code means fully reproducible environments — destroy and reprovision the entire lab in minutes.
cloud-init Internals: Understanding user-data, meta-data, and network-config gave much more control over first-boot state.